I am working with Sangoma support, but I also want to pose this question in the public forums to see if anyone has an idea. I have a large system: 800+ phones on a fairly robust dedicated server (12-core Xeon, 32 GB RAM, 1 TB RAID). It's running PBXact on Asterisk 16.
The server has dual NICs: one faces the internet (through a router) and the other faces the local network where all the phones connect. Running Wireshark on both interfaces at the same time, I have located and analyzed the RTP streams from the carrier to the system, and then from the system to the phones. What we are experiencing is increased jitter and out-of-sequence packets when the RTP passes through the server. A captured call on the carrier side shows RTP with low jitter and proper timestamps, but the same stream leaving the server on the other NIC shows much higher jitter and timestamps much further out of order. I see the opposite in the other direction: good RTP from the phones, then higher jitter and out-of-order timestamps when the stream leaves on the carrier side. This is degrading the calls and can make them hard to hear.

There does not seem to be any connection to call volume; it happens with 2 active calls and with 12. Bandwidth, when analyzed, tops out at no more than 6 MB/s on 1 Gb links, so it cannot be network speed. It looks more like an issue with the Linux kernel not giving priority to the RTP streams, or maybe the NIC buffers. I did notice it seems to happen more to established calls when a new call comes in and rings multiple phones, i.e. when Asterisk has to do a bunch of non-RTP processing. The system load is not excessive, usually staying below 4 during the active day.

Does anyone have an idea or suggestion for Linux kernel adjustments that could give the RTP streams more priority, increase internal bus speeds, or enlarge the NIC buffers, so that RTP can flow through the system without being degraded? Thanks!
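To frame the question, these are the kinds of knobs I mean. The values below are illustrative guesses I have seen suggested for UDP-heavy boxes, not something tested on this server or endorsed by Sangoma:

```
# Larger kernel socket buffers and backlog to absorb UDP/RTP bursts
# (e.g. in /etc/sysctl.d/90-rtp.conf) -- values are starting points only:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 5000

# NIC ring buffers: inspect the current and hardware maximum, then raise
# toward the maximum ("eth0" is a placeholder for either interface):
#   ethtool -g eth0
#   ethtool -G eth0 rx 4096 tx 4096
```

Is this the right area to be looking in, or is there a better way to prioritize the RTP path through the kernel?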
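For reference, the jitter figures I am quoting are what Wireshark reports, which is the RFC 3550 interarrival jitter; you can recompute it yourself from a capture's arrival times and RTP timestamps to compare the two NICs numerically. A minimal sketch (the sample stream is made up, assuming G.711 at 8000 Hz, i.e. 160 timestamp ticks per 20 ms packet):

```python
# Interarrival jitter as defined in RFC 3550 section 6.4.1 (what Wireshark's
# RTP stream analysis reports). Both values per packet must be in the same
# units -- here, RTP timestamp ticks (8000 Hz for G.711).

def interarrival_jitter(packets):
    """packets: list of (arrival_time, rtp_timestamp) tuples for one stream."""
    jitter = 0.0
    for (r_prev, s_prev), (r, s) in zip(packets, packets[1:]):
        d = (r - r_prev) - (s - s_prev)   # difference in relative transit time
        jitter += (abs(d) - jitter) / 16  # RFC 3550 1/16 smoothing gain
    return jitter

# Made-up stream: packets every 20 ms (160 ticks), with the third packet
# arriving 5 ms (40 ticks) late.
stream = [(0, 0), (160, 160), (360, 320), (480, 480)]
print(interarrival_jitter(stream))  # → 4.84375 ticks (≈ 0.61 ms at 8 kHz)
```

Comparing this number for the same SSRC on the carrier-facing and phone-facing captures is how I concluded the jitter is being added inside the box rather than on either network.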