Asterisk Call Delay

I have a strange issue where some calls take 10-15 seconds before Asterisk sends the INVITE to the peer.

The problem is not permanent: a few calls work fine, and then one or two calls are delayed.
For example, one delayed call:

CLI> pjsip show history

00003 1552335791 * <== 192.168.2.200:56089 INVITE sip:11@192.168.2.1 SIP/2.0
00004 1552335791 * ==> 192.168.2.200:56089 SIP/2.0 100 Trying
00009 1552335800 * ==> 192.168.2.200:14106 INVITE sip:11@192.168.2.200:14106 SIP/2.0
00010 1552335800 * <== 192.168.2.200:14106 SIP/2.0 100 Trying
00011 1552335800 * <== 192.168.2.200:14106 SIP/2.0 180 Ringing
00012 1552335800 * ==> 192.168.2.200:56089 SIP/2.0 180 Ringing

As you can see there is a 9-second delay before * sends the INVITE to the peer. This is what I get in the debug logs (IP address and domain are obfuscated); note the jump from 20:23:11.160 to 20:23:20.161:

[2019-03-11 20:23:11.160] DEBUG[13500] res_pjsip_session.c: Negotiating incoming SDP media stream 'audio' using audio SDP handler
[2019-03-11 20:23:11.160] DEBUG[13500] netsock2.c: Splitting '192.168.2.250' into...
[2019-03-11 20:23:11.160] DEBUG[13500] netsock2.c: ...host '192.168.2.250' and port ''.
[2019-03-11 20:23:11.160] DEBUG[13500] netsock2.c: Splitting '0.0.0.0' into...
[2019-03-11 20:23:11.160] DEBUG[13500] netsock2.c: ...host '0.0.0.0' and port ''.
[2019-03-11 20:23:11.160] DEBUG[13500] res_pjsip_sdp_rtp.c: Transport transport-udp bound to 0.0.0.0: Using it for RTP media.
[2019-03-11 20:23:11.160] DEBUG[13500] rtp_engine.c: Using engine 'asterisk' for RTP instance '0x7faf0c06fb70'
[2019-03-11 20:23:11.160] DEBUG[13500] res_rtp_asterisk.c: Allocated port 14788 for RTP instance '0x7faf0c06fb70'
[2019-03-11 20:23:11.160] DEBUG[13500] res_rtp_asterisk.c: Creating ICE session 0.0.0.0:14788 (14788) for RTP instance '0x7faf0c06fb70'
[2019-03-11 20:23:11.160] DEBUG[13500] netsock2.c: Splitting '192.168.2.1' into...
[2019-03-11 20:23:11.160] DEBUG[13500] netsock2.c: ...host '192.168.2.1' and port ''.
[2019-03-11 20:23:11.160] DEBUG[13500] netsock2.c: Splitting '192.168.2.1' into...
[2019-03-11 20:23:11.160] DEBUG[13500] netsock2.c: ...host '192.168.2.1' and port ''.
[2019-03-11 20:23:11.160] DEBUG[13500] netsock2.c: Splitting '1.2.3.4' into...
[2019-03-11 20:23:11.160] DEBUG[13500] netsock2.c: ...host '1.2.3.4' and port ''.
[2019-03-11 20:23:11.160] DEBUG[13500] netsock2.c: Splitting '1.2.3.4' into...
[2019-03-11 20:23:11.160] DEBUG[13500] netsock2.c: ...host '1.2.3.4' and port ''.
[2019-03-11 20:23:20.161] DEBUG[13500] rtp_engine.c: RTP instance '0x7faf0c06fb70' is setup and ready to go
[2019-03-11 20:23:20.161] DEBUG[13500] netsock2.c: Splitting 'pbx.company.com' into...
[2019-03-11 20:23:20.161] DEBUG[13500] netsock2.c: ...host 'pbx.company.com' and port ''.
[2019-03-11 20:23:20.163] DEBUG[13500] acl.c: Multiple addresses. Using the first only
[2019-03-11 20:23:20.163] DEBUG[13500] res_rtp_asterisk.c: Setup RTCP on RTP instance '0x7faf0c06fb70'
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Setting tx payload type 8 based on m type on 0x7faf423ec2b0
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Setting tx payload type 0 based on m type on 0x7faf423ec2b0
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Crossover copying tx to rx payload mapping 0 (0x7faf0c064018) from 0x7faf423ec2b0 to 0x7faf423ec2b0
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Crossover copying tx to rx payload mapping 8 (0x7faf0c0125a8) from 0x7faf423ec2b0 to 0x7faf423ec2b0
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Crossover copying tx to rx payload mapping 101 (0x7faf0c064068) from 0x7faf423ec2b0 to 0x7faf423ec2b0
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Crossover copying tx to rx payload mapping 123 (0x7faf0c0640b8) from 0x7faf423ec2b0 to 0x7faf423ec2b0
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Copying rx payload mapping 0 (0x7faf0c064018) from 0x7faf423ec2b0 to 0x7faf0c06fd48
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Copying rx payload mapping 8 (0x7faf0c0125a8) from 0x7faf423ec2b0 to 0x7faf0c06fd48
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Copying rx payload mapping 101 (0x7faf0c064068) from 0x7faf423ec2b0 to 0x7faf0c06fd48
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Copying rx payload mapping 123 (0x7faf0c0640b8) from 0x7faf423ec2b0 to 0x7faf0c06fd48
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Copying tx payload mapping 0 (0x7faf0c064018) from 0x7faf423ec2b0 to 0x7faf0c06fd48
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Copying tx payload mapping 8 (0x7faf0c0125a8) from 0x7faf423ec2b0 to 0x7faf0c06fd48
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Copying tx payload mapping 101 (0x7faf0c064068) from 0x7faf423ec2b0 to 0x7faf0c06fd48
[2019-03-11 20:23:20.163] DEBUG[13500] rtp_engine.c: Copying tx payload mapping 123 (0x7faf0c0640b8) from 0x7faf423ec2b0 to 0x7faf0c06fd48

I'm using the PJSIP stack with the UDP transport. All peers are stored in a realtime database.
Is something wrong with the RTP engine?

pjsip show endpoint 16
rtp_engine : asterisk

pbx1*CLI> core show settings

PBX Core settings
-----------------
Version:                     16.2.1
Build Options:               BUILD_NATIVE, OPTIONAL_API

Thank you.

Most likely your DNS is not set up to either reverse-resolve 1.2.3.4, or to quickly reject it as unresolvable.
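A first thing worth checking is which nameservers and resolver options the Asterisk host is actually using. A small sketch (assuming a glibc-style /etc/resolv.conf; the sample config below is made up for illustration):

```python
# Parse a glibc-style resolv.conf to see the nameservers and
# "options timeout:n attempts:m" settings the resolver will use.
def parse_resolv_conf(text):
    nameservers, options = [], {}
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith(('#', ';')):
            continue  # skip blanks and comments
        if parts[0] == 'nameserver' and len(parts) > 1:
            nameservers.append(parts[1])
        elif parts[0] == 'options':
            for opt in parts[1:]:
                key, _, val = opt.partition(':')
                options[key] = int(val) if val else True
    return nameservers, options

# Made-up sample; on the real box feed it the contents of /etc/resolv.conf.
sample = """\
nameserver 192.168.50.115
options timeout:2 attempts:1
"""
print(parse_resolv_conf(sample))  # (['192.168.50.115'], {'timeout': 2, 'attempts': 1})
```

If the file lists a nameserver that is unreachable from the PBX, every lookup has to time out on it first, even when `dig` against a working server is fast.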

I tried a few times using dig -x 1.2.3.4, and it consistently resolved very fast.

Did it resolve very fast?

Yes, it was immediate:

[admin@dev ~]$ time dig -x 1.2.3.4

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;4.3.2.1.in-addr.arpa. IN PTR

;; ANSWER SECTION:
4.3.2.1.in-addr.arpa. 299 IN PTR pbx1.mycompany.com.

;; Query time: 253 msec
;; SERVER: 192.168.50.115#53(192.168.50.115)
;; WHEN: Tue Mar 12 11:43:37 GMT 2019
;; MSG SIZE rcvd: 91

real 0m0.272s
user 0m0.006s
sys 0m0.014s

and the forward lookup as well:

[admin@dev ~]$ time dig pbx1.mycompany.com.

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;pbx1.mycompany.com. IN A

;; ANSWER SECTION:
pbx1.mycompany.com. 86400 IN A 1.2.3.4

;; Query time: 0 msec
;; SERVER: 192.168.50.115#53(192.168.50.115)
;; WHEN: Tue Mar 12 11:52:38 GMT 2019
;; MSG SIZE rcvd: 149

real 0m0.023s
user 0m0.008s
sys 0m0.015s
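Since the delay is intermittent, a single fast `dig` does not rule out the occasional slow or dropped response. A sketch that times repeated reverse lookups (127.0.0.1 keeps it runnable anywhere; on the Asterisk box, substitute the public address, 1.2.3.4 in the obfuscated logs):

```python
import socket
import time

def time_ptr_lookups(ip, attempts=20):
    """Time repeated reverse (PTR) lookups; a single multi-second
    spike here would match the intermittent 9 s gap in the debug log."""
    timings = []
    for _ in range(attempts):
        start = time.monotonic()
        try:
            socket.gethostbyaddr(ip)
        except OSError:
            pass  # a fast NXDOMAIN is fine; only a slow answer blocks the call
        timings.append(time.monotonic() - start)
    return timings

for elapsed in time_ptr_lookups("127.0.0.1", attempts=5):
    print(f"{elapsed * 1000:8.1f} ms")
```

Running this in a loop while placing calls would show whether the slow calls line up with slow lookups.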

The problem for me is that the only thing that can normally cause a delay at that point is a DNS timeout.
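For scale, an illustration of why a DNS timeout fits the gap (assuming a glibc-style stub resolver, where the worst-case wait is roughly timeout multiplied by attempts per nameserver, with defaults of timeout:5 and attempts:2):

```python
# Rough worst-case blocking time for a stub-resolver query when every
# listed nameserver is unresponsive: ~ timeout * attempts * nameservers.
# glibc defaults ("options timeout:n attempts:m"): timeout=5, attempts=2.
def worst_case_delay(timeout=5, attempts=2, nameservers=1):
    return timeout * attempts * nameservers

print(worst_case_delay())                       # 10 -- near the observed ~9 s gap
print(worst_case_delay(timeout=2, attempts=1))  # 2 -- tighter resolv.conf options
```

One silently unresponsive nameserver with default options blocks for about 10 seconds, which is right in the neighbourhood of the 9-second gap in the log, even though queries that hit a working server return instantly.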