I am using a PHP AGI to handle calls; the entire call logic happens in the AGI. I have been noticing a lot of defunct AGIs, and a bit of testing turned up something interesting. If there is only one call, there is no issue. Once I have two or more calls using the same AGI, then whenever any call hangs up (so long as at least one is still active), the finished PHP AGI processes show up as defunct. When the last call using the AGI ends, all the defunct processes magically go away. I spoke to developers smarter than me, and they think Asterisk only does its cleanup when the last call goes away, which frees up all the other file descriptors. Does this seem like a bug in Asterisk?
EDIT: This does not happen with all my AGIs. A simple $agi->verbose("hello world"); sleep(90); does not seem to have this issue. I am going to take the AGI that has the issue and slowly copy it over into the hello-world script to see where it breaks.
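For what it's worth, a "defunct" entry is just an unreaped zombie: a child that has exited stays in the process table until its parent calls wait() on it, which matches the theory that Asterisk only reaps these AGI children at the end. A minimal Python illustration of that mechanism (not Asterisk's actual code; Linux-only, since it reads /proc):

```python
import os
import time

# Fork a child that exits immediately. Until the parent reaps it with
# waitpid(), the child lingers in the process table as a zombie
# ("<defunct>" in ps output), exactly like the stuck AGI processes.
pid = os.fork()
if pid == 0:
    os._exit(0)  # child: exit right away

time.sleep(0.2)  # give the child time to exit

# Field 3 of /proc/<pid>/stat is the process state.
with open(f"/proc/{pid}/stat") as f:
    state = f.read().split()[2]
print(state)  # prints "Z" (zombie) because the child has not been reaped

os.waitpid(pid, 0)  # reaping it makes the <defunct> entry disappear
```

The zombie vanishes the moment the parent waits on it, which is why all your defunct AGIs disappear together once whatever cleanup path does the waiting finally runs.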
May be simpler to just use FastAGI. That way you have a single server process handling all the AGI requests, instead of spawning a new process for each one. There’s a reason it’s called “fast”, after all.
He says the whole call logic is in the AGI. In that case even FastAGI would need to fork subprocesses; otherwise you could only handle one call at a time. You would, however, have more control over cleaning them up.
I’d have to question why they are doing the sequencing in AGI.
There is no need to fork subprocesses to handle multiple connections when you have asyncio. I have a quick-and-dirty example using Python asyncio in the fastagi_try_async script in this repo.
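Not the actual fastagi_try_async script, but a minimal sketch of the same idea (assuming the conventional FastAGI port 4573 and a single VERBOSE command; the env parsing follows the AGI protocol's "agi_name: value" lines terminated by a blank line):

```python
import asyncio

async def handle_call(reader, writer):
    # asyncio.start_server runs this coroutine as a separate task for
    # each connection, so concurrent calls need no forked subprocesses.

    # Asterisk sends the agi_* environment variables first,
    # terminated by a blank line.
    env = {}
    while (line := (await reader.readline()).decode().strip()):
        key, _, value = line.partition(": ")
        env[key] = value

    # Issue one AGI command and read the "200 result=..." reply.
    writer.write(b'VERBOSE "hello world" 1\n')
    await writer.drain()
    await reader.readline()  # e.g. b"200 result=1\n"

    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_call, "0.0.0.0", 4573)
    async with server:
        await server.serve_forever()
```

Start it with asyncio.run(main()) and point the dialplan at agi://your-host/ as usual; one process then services every call.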
You then have to implement explicit state machines. I rather suspect that the reason the OP is using AGI for sequencing is that they are more comfortable working with a traditional procedural language than with dialplan (even though dialplan is itself actually a procedural language). Having to use explicit state variables, rather than the implicit program-counter one, may well destroy that advantage.
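To make the objection concrete, here is a hypothetical sketch of an "explicit state machine" for one call: a single-threaded server that multiplexes connections cannot block mid-function, so each call must carry a state variable recording which step of the dialogue it is on (the ANSWER/STREAM FILE dialogue here is invented for illustration):

```python
# Per-call states: collecting agi_* variables, then waiting on the
# result of each AGI command in turn.
READING_ENV, AWAITING_ANSWER_RESULT, AWAITING_PLAYBACK_RESULT = range(3)

class CallSession:
    def __init__(self):
        self.state = READING_ENV   # explicit replacement for the program counter
        self.env = {}

    def on_line(self, line):
        """Feed one received line in; return the next command to send, if any."""
        if self.state == READING_ENV:
            if line:                              # still collecting agi_* variables
                key, _, value = line.partition(": ")
                self.env[key] = value
                return None
            self.state = AWAITING_ANSWER_RESULT   # blank line ends the env block
            return "ANSWER"
        if self.state == AWAITING_ANSWER_RESULT:  # got "200 result=..." for ANSWER
            self.state = AWAITING_PLAYBACK_RESULT
            return 'STREAM FILE welcome ""'
        return None                               # dialogue finished
```

Every branch of call flow becomes another state constant and another transition, which is the bookkeeping a plain top-to-bottom AGI script lets you avoid.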
No you don’t. Look at my example code and you’ll see: it creates an async task to handle each connection, and the code for each task is structured linearly, instead of being broken up into callbacks as you’re imagining. It looks like threading, but since preemption only happens at explicit await-points, there is much less chance of trouble from race conditions.
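That linear structure might look like this sketch: the same three-step dialogue as a state machine would need, but the "state" is just where the coroutine is suspended between await-points, held in local variables (the ANSWER/STREAM FILE dialogue is again illustrative, not the OP's actual logic):

```python
import asyncio

async def handle_call(reader, writer):
    # Helper: send one AGI command, await its "200 result=..." reply.
    async def command(cmd):
        writer.write(cmd.encode() + b"\n")
        await writer.drain()
        return (await reader.readline()).decode().strip()

    # Read the agi_* environment block, terminated by a blank line.
    env = {}
    while (line := (await reader.readline()).decode().strip()):
        key, _, value = line.partition(": ")
        env[key] = value

    # The call flow reads top to bottom; each await suspends this task
    # while other calls run, with no explicit state variable anywhere.
    await command("ANSWER")
    await command('STREAM FILE welcome ""')

    writer.close()
    await writer.wait_closed()
```

Each connection gets its own instance of this coroutine (e.g. via asyncio.start_server), so concurrent calls interleave only at the awaits.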