I have an ARI application running locally on a server with Asterisk. I want to improve redundancy and have been evaluating the various proxy/message bus setups.
I have done a lot of research into the available proxies and have a fair understanding of how they operate. I would like to get a bit of insight so that I can better adapt my existing application to work in this environment.
The existing proxies handle:
Stasis event stream
HTTP commands
HTTP command responses
I am unsure why the HTTP commands/responses go via the proxy. The main purpose I can see is that the events and command responses are merged into the same stream. I can imagine I will need to do some refactoring to handle this.
The question:
Are there any problems that can arise from handling the events via the message bus but sending HTTP commands directly? (diagram attached)
Can anyone offer insight into the benefit of having all of the events/responses in the same stream? Is there a particular design pattern that this helps with?
Edit: The diagram should reflect that any app server could send HTTP commands directly to any Asterisk box, roughly as in the sketch below.
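To make the hybrid flow concrete, here is a minimal sketch, assuming the proxy publishes raw Stasis event JSON onto a NATS subject. The subject name, hosts and credentials are made up; only the /channels/{id}/answer call is a standard ARI endpoint.

```python
import asyncio, json
import nats      # nats-py client
import requests

ARI_URL = "http://asterisk1:8088/ari"   # placeholder Asterisk host
ARI_AUTH = ("ariuser", "arisecret")     # placeholder credentials

async def main():
    nc = await nats.connect("nats://bus:4222")   # placeholder bus address

    async def on_event(msg):
        event = json.loads(msg.data)
        # Events arrive via the bus, but commands go straight to Asterisk.
        if event.get("type") == "StasisStart":
            channel_id = event["channel"]["id"]
            requests.post(f"{ARI_URL}/channels/{channel_id}/answer", auth=ARI_AUTH)

    await nc.subscribe("ari.events.myapp", cb=on_event)  # placeholder subject
    await asyncio.Event().wait()   # keep consuming events

asyncio.run(main())
```

The idea is that event consumers scale horizontally behind the bus while commands still hit whichever Asterisk box raised the event.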
The message bus represents a cluster of something like NATS or RabbitMQ. An ARI application can only have a single WebSocket connection, and the message bus allows more than one consumer of the events from that application. The single WebSocket connection is actually the single point of failure; the bus alleviates that problem.
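As a rough sketch (not any particular proxy's code), the forwarder side is just the one process that owns the WebSocket and republishes every Stasis event onto the bus. Hosts, app name, subject and credentials below are placeholders.

```python
import asyncio
import nats
import websockets

# The single ARI WebSocket lives in this one process; every event it receives
# is republished onto the bus so any number of consumers can read it.
ARI_WS = "ws://asterisk1:8088/ari/events?app=myapp&api_key=ariuser:arisecret"
BUS_SUBJECT = "ari.events.myapp"

async def forward_events():
    nc = await nats.connect("nats://bus:4222")
    async with websockets.connect(ARI_WS) as ws:
        async for raw in ws:                       # one Stasis event per message
            data = raw if isinstance(raw, bytes) else raw.encode()
            await nc.publish(BUS_SUBJECT, data)    # fan out to all subscribers

asyncio.run(forward_events())
```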
@ldo They’re referring to multiple application instances connecting to Asterisk as the same application name, which is not supported. Only a single connection per application is implemented currently.
Ah, OK. Still, it’s possible to have multiple ARI clients originating calls and listening for the resulting events, each using their own dynamically-generated application name.
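For example, something like the following, with an illustrative host and credentials; it assumes each client has already opened its own events WebSocket for its generated app name, so the app is registered with Asterisk before originating.

```python
import uuid
import requests

ARI_URL = "http://asterisk1:8088/ari"   # placeholder host
ARI_AUTH = ("ariuser", "arisecret")     # placeholder credentials

# Each client picks its own application name, so its events arrive on its own
# WebSocket rather than colliding with another instance's connection.
app_name = f"dialer-{uuid.uuid4().hex[:8]}"

requests.post(
    f"{ARI_URL}/channels",
    params={"endpoint": "PJSIP/alice", "app": app_name},
    auth=ARI_AUTH,
)
```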
One thing I have found is that, having created an ARI application name, there seems to be no API call to delete it.
I appreciate the interest in the post. I’ll refocus the question.
When using an ARI proxy/message bus, what is the benefit of having the event stream, commands and responses all go via the same message bus?
Hello, we are discussing ARI, not AMI, in this context. Currently, we do not use AMI in the module. From what I recall, not all AMI events are present in Stasis. For AMI, we have a proxy that is responsible for pushing all AMI events onto the bus. I hope this clarifies your question.
Sylvain
Hi @helixtornado
I’m also trying to understand the ari-proxy, which is how I ended up here.
As I understand it, the purpose of this is to tag newly created resources onto the existing stream. This is crucial when you have multiple consumers and you want to process different calls’ events simultaneously but a single call’s events sequentially.
When you send commands through the proxy, it maps the resource ID in the response to the current call, which preserves the order of events for that call. If you send the command directly to Asterisk, the proxy will treat the new resource’s events as a new call, and ordering from the call’s perspective is lost. The bookkeeping is roughly as sketched below.
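Something like this toy sketch (not the actual ari-proxy code; publish() is a stand-in for whatever puts the event on the bus keyed by call):

```python
# Map every resource (channel/bridge/playback id) to the call it belongs to,
# so later events for that resource can be tagged with the same call key.
resource_to_call = {}

def publish(call_id, event):
    # Stand-in for publishing the event onto the bus, keyed by call.
    print(call_id, event.get("type"))

def on_command_response(call_id, response_json):
    # Command sent through the proxy: any new resource in the response is
    # attached to the call that issued the command.
    new_id = response_json.get("id")
    if new_id:
        resource_to_call[new_id] = call_id

def route_event(event_json):
    resource_id = (event_json.get("channel") or event_json.get("bridge") or {}).get("id")
    call_id = resource_to_call.get(resource_id)
    if call_id is None:
        # Resource was created outside the proxy (direct HTTP command), so the
        # proxy has no mapping and treats it as the start of a new call.
        call_id = resource_id
        resource_to_call[resource_id] = call_id
    publish(call_id, event_json)
```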
I’m now focusing on a Kafka-based proxy, which has topic partitions. Keying by call ensures that all of a call’s resources’ events land on one partition, and events on a partition can only be consumed sequentially, which guarantees the order of events (see the producer sketch below).
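With kafka-python, that keying is just the message key; the broker address and topic name here are placeholders.

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",          # placeholder broker
    key_serializer=lambda k: k.encode(),
    value_serializer=lambda v: json.dumps(v).encode(),
)

def publish_event(call_id, event):
    # Using the call id as the message key means the default partitioner hashes
    # all of this call's events to the same partition, so a consumer reads them
    # in order; different calls can still be processed in parallel.
    producer.send("ari-events", key=call_id, value=event)
```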
If you don’t care about the order of events, or you handle ordering by some other method, you can send commands directly to Asterisk. Hope I answered your question.