1) How does the Playback application work at a deep level?
For example: an audio file in alaw format has a duration of 120 seconds, and about 500 calls launch the Playback application on the same file at once. Does Asterisk save this file to a cache (RAM) and play the audio from there? Or does it play the audio directly from disk storage, frame by frame like an incoming stream, using a large number of disk read operations? Would holding this audio on a virtual RAM disk improve performance? Or are there better options to prevent blocking disk reads (the disks are actually remote, so network usage matters too)?
2) What is the best option to improve performance when Asterisk needs to report information about a completed call?
For example, we launch 1000 calls simultaneously and have two ways to get the info after each call: A) a single AMI event listener connected from a remote machine that computes the results, or B) hangup handlers in the dialplan that launch curl on the Asterisk machine via Task Spooler (queued commands, executed one by one). curl reports the ended call to the remote initiator machine, like this: curl -X POST -F 'id=id' -F 'status=ANSWER' -F 'duration=43' http://remote.initiator/api
I am really afraid that option A will consume more resources on the Asterisk machine and run into unexpected limits.
3) What does this error mean at a deep level: app_dial.c: Unable to write frametype: 2
I have no errors in the log files; I only log NOTICE, WARNING, and ERROR messages to save performance. There are just a lot of warning messages like the one above, and the warning appears only when a call is canceled. I can't ignore them: I want to understand why this happens when Asterisk CANCELs calls correctly. All SIP confirmations of the CANCEL message are received correctly from the provider, yet the WARNING is still generated. Is frametype 2 VOICE? Can you give me some details so I can fix this WARNING myself? If it is VOICE, why does Asterisk try to pass voice on a channel that was never answered, at the moment it sends the CANCEL? Is it related to early media (the ringing RTP sounds from the provider)?
Great thanks for your answers!!!
PS. I'm using an old version of Asterisk with chan_sip.
Neither of the above. It reads using stdio, which the OS fulfils with a buffer per stream, replenished by calls to the read() system call. read() in turn is served from a cache maintained by the OS in free system memory (i.e. it isn't accounted to Asterisk, and the cache doesn't care which process is reading it).
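To put numbers on it: alaw at 8 kHz is 8000 bytes per second (one byte per sample), so a 20 ms frame is 160 bytes and a 120-second file is about 960 KB, which easily fits in the page cache after the first read. A minimal Python sketch of the frame-by-frame read pattern (using a generated dummy file, not a real prompt):

```python
import os
import tempfile

ALAW_RATE = 8000                      # bytes/s: 8 kHz, 1 byte per sample
FRAME_BYTES = ALAW_RATE * 20 // 1000  # 160 bytes per 20 ms frame
DURATION_S = 120

# Generate a dummy 120-second alaw file (0xD5 is alaw "silence").
tmp = tempfile.NamedTemporaryFile(suffix=".alaw", delete=False)
tmp.write(b"\xd5" * (ALAW_RATE * DURATION_S))
tmp.close()

# Buffered, frame-by-frame reads: the first pass pulls the file into the
# OS page cache; later passes (e.g. other concurrent calls) are served
# from RAM, not from the physical disk.
frames = 0
with open(tmp.name, "rb") as f:
    while chunk := f.read(FRAME_BYTES):
        frames += 1

print(frames)  # 6000 frames for a 120 s file
os.unlink(tmp.name)
```

So 500 concurrent playbacks of the same file cost roughly one read of the file from disk; the rest is memory traffic.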
Not a good idea. Also, I don't know how the OS handles cache consistency, and it may vary between NFS and SMB. It may be forced to re-read every time because it cannot know whether the file has changed. (It looks like NFS clients should cache in RAM and should check the file's modification time about every minute to decide whether to flush the cache.) If you didn't mean NFS or SMB, please explain what you meant.
Not a good idea. If nothing else, your ITSP may not like you. If you are going to do things like this, you need to discuss them with the ITSP.
This avoids 1,000s of process launches, so is obviously better.
It is voice. No one else would care about the exact reasons. I would guess that, whilst the call is being set up, incoming media is silently thrown away, but when the channel is being closed down, a higher level of the code detects that the media cannot be output. In general one ignores warnings unless there is evidence of a failure. I imagine you would have to modify the source code. Obviously deleting the line that does the logging would suppress the message, but that might also suppress it when there was a real problem. I'm not going to delve into the code to give a more precise answer.
> Neither of the above. It reads using stdio, which the OS fulfils with a buffer per stream, replenished by calls to the read() system call. read() in turn is served from a cache maintained by the OS in free system memory (i.e. it isn't accounted to Asterisk, and the cache doesn't care which process is reading it).
So what happens if 500 calls play the same audio at once? What is the best option to avoid bottlenecks?
> Not a good idea. Also, I don't know how the OS handles cache consistency, and it may vary between NFS and SMB. It may be forced to re-read every time because it cannot know whether the file has changed. (It looks like NFS clients should cache in RAM and should check the file's modification time about every minute to decide whether to flush the cache.) If you didn't mean NFS or SMB, please explain what you meant.
Sure. It is safer to first copy the necessary file from NFS (SMB) storage to the local machine and point the Playback application (or another one) at the local file.
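A sketch of that staging idea in Python (all paths are hypothetical; the demo uses temporary directories standing in for the NFS mount and the local sounds directory):

```python
import filecmp
import os
import shutil
import tempfile

def stage_locally(remote_path: str, local_dir: str) -> str:
    """Copy a prompt from slow (NFS/SMB) storage to local disk once, so
    Playback reads a local file that the OS can keep in its page cache."""
    os.makedirs(local_dir, exist_ok=True)
    local_path = os.path.join(local_dir, os.path.basename(remote_path))
    # Re-copy only if the local copy is missing or older than the remote
    # (copy2 preserves the remote mtime, so an unchanged file is skipped).
    if (not os.path.exists(local_path)
            or os.path.getmtime(local_path) < os.path.getmtime(remote_path)):
        shutil.copy2(remote_path, local_path)
    return local_path

# Demo with temp directories in place of the NFS mount and local storage.
with tempfile.TemporaryDirectory() as nfs, tempfile.TemporaryDirectory() as local:
    src = os.path.join(nfs, "promo.alaw")   # hypothetical prompt name
    with open(src, "wb") as f:
        f.write(b"\xd5" * 160)
    staged = stage_locally(src, local)
    identical = filecmp.cmp(src, staged, shallow=False)
    print(identical)  # True
```

Run something like this before the calls start, then give Playback the local path; the network share is touched once per file instead of once per call.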
> Not a good idea. If nothing else, your ITSP may not like you. If you are going to do things like this, you need to discuss them with the ITSP.
The ITSP just wants money. The only conditions from the ITSP are a CPS limit of 25 and network bandwidth of 1000 Mbps. The CPS limit is easily enforced in the Asterisk dialplan or by a SIP proxy between Asterisk and the provider.
> This avoids 1,000s of process launches, so is obviously better.
Even if the curl commands launch one by one (queued)? A lot of data will be sent over AMI (events) simultaneously; I hope Asterisk can handle it.
When you set up the AMI connection, you have the option to filter which events you want to receive. This limits the data sent on the connection to nothing more than what you need (or at least close to it; I don't remember whether the filter is by event class or can be set to single events).
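For example, in manager.conf you can restrict an AMI user to one event class and whitelist individual events (a sketch; the user name, secret, and exact filter line are assumptions — check the eventfilter syntax for your Asterisk version):

```
[reporter]
secret = s3cret              ; hypothetical credentials
read = call                  ; only the "call" event class
write =
eventfilter = Event: Hangup  ; whitelist: deliver only Hangup events
```

With a filter like this, 1000 simultaneous calls produce one small Hangup event each on the socket, which is far cheaper than 1000 process launches.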