I’m new to Asterisk and SIP and am trying to get to the bottom of an issue that occurs when sending a WAV file to a remote system. The audio heard on the remote equipment (I’m not sure exactly what it is) plays, but suffers from jitter and fading (i.e. the volume drops momentarily).
The audio is converted from an MP3 file using sox. I’ve tried several codecs (PCM, alaw, ulaw) and have also reduced the volume in case of clipping.
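For reference, the conversion is along these lines (the file names are just examples and the exact flags may differ from what I actually ran):

    sox message.mp3 -r 8000 -c 1 -b 16 -e signed-integer message.wav    # 8 kHz mono 16-bit PCM
    sox message.mp3 -r 8000 -c 1 -e u-law message-ulaw.wav              # mu-law variant
    sox message.mp3 -r 8000 -c 1 -e a-law message-alaw.wav              # A-law variant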
It’s running on Linux in a VM, and the system isn’t showing signs of high CPU or memory usage.
Asterisk shows no errors in the log, but with RTP debug enabled I do see occasions where a ‘Sent’ packet doesn’t have a corresponding ‘Got’ packet.
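(For reference, the debug was enabled from the Asterisk CLI, i.e. connect with asterisk -rvvv and then run:

    rtp set debug on

and rtp set debug off afterwards.)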
As I said above, I’m new to this, but I’m now wondering whether the problem is network-related.
That’s something that doesn’t happen in digital systems! One might get complete dropouts, or interpolation of missing sections, but digital accurately preserves signal levels. I think you need to look more closely at the received audio and better characterise the problem.
I think we need the actual logs.
Is the VM optimised to ensure it gets CPU within about 20 ms?
That’s something that doesn’t happen in digital systems! One might get complete dropouts, or interpolation of missing sections, but digital accurately preserves signal levels. I think you need to look more closely at the received audio and better characterise the problem.
Understood, I thought it might be an automatic gain/normalisation issue. I’ve looked closely at a recording of a message in Audacity. It shows short periods (less than 10 ms) where the samples drop to zero, so possibly I am hearing that as reduced volume rather than jitter.
So, that would imply (at least to me) that the packets are sent with timing information, so lost packets manifest as a gap in the output. Would that be right?
I think we need the actual logs.
I’ll get them; for logistical reasons it’s surprisingly difficult to capture logs while a message is actually being played.
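If it helps, one option I’m considering is to leave a full log channel enabled in logger.conf so everything is written to a file I can pull afterwards (sketch only, not yet tested here):

    [logfiles]
    full => notice,warning,error,verbose,debug

followed by logger reload at the CLI.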
Is the VM optimised to ensure it gets CPU within about 20 ms?
This is interesting, as I can see it being an issue. I’ll find out what hypervisor is in use and see what can be done to optimise it.
The default behaviour of Asterisk is to pass the audio through at the same level as received. I’m not sure it can even do AGC, but you would have to be very explicit about invoking it if it could.
I have taken a deeper look into this and captured the output with Wireshark. If I use the Telephony feature and decode the RTP packets, I can play back the message. When I send an audio file encoded as ulaw (or alaw), Wireshark plays the message fine, so I’m now suspicious that the problem is with the remote system.
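As an aside, the same capture can also be fed to tshark for per-stream RTP statistics (packet loss, jitter) if that’s useful, e.g.:

    tshark -r capture.pcap -q -z rtp,streams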
I’ve noted that the documentation for the remote (which goes via a Zenitel ICX router) says the other end expects g722.
If I use ffmpeg to create a g722 message and play it, Wireshark seems to recognise the g722 format, but the decoded packets are just noise (the remote plays noise as well). I’m not sure where the issue lies. I’ve (hopefully) configured Asterisk to allow g722 and installed a plug-in.
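For what it’s worth, the ffmpeg command and codec configuration were roughly along these lines (sketched from memory, so details may be off; this assumes chan_pjsip, and G.722 wants 16 kHz mono input):

    ffmpeg -i message.mp3 -ar 16000 -ac 1 -acodec g722 message.g722

    ; pjsip.conf, endpoint section for the remote system
    disallow=all
    allow=g722
    allow=ulaw
    allow=alaw

I believe the codec module can be checked at the Asterisk CLI with module show like codec_g722, and the available conversions with core show translation.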
My question is: should I expect Asterisk to handle g722, and what might I have missed in the configuration?