I’m building an AI-powered voice agent. I’m comfortable with Python and Asterisk, but I’m unsure whether AudioSocket is the right transport for my application.
Requirements:
- Use AMI so my engine can issue Asterisk commands, such as originating calls.
- Establish a channel-based connection between my engine and Asterisk, such as WebSocket, RTP, or TCP.
- The system workflow involves:
  - The engine sending synthesized audio chunks to Asterisk.
  - Asterisk streaming audio chunks back to the engine for transcription.
  - An LLM (Large Language Model) managing the conversation in between.
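For the AMI part, this is a minimal sketch of what I have in mind: a raw-socket login on AMI's default TCP port 5038, followed by an `Originate` action. The host, credentials, endpoint (`PJSIP/1000`), and dialplan context (`engine-in`) are placeholders for my setup, not values from any real config:

```python
import socket

def build_action(action: str, **headers: str) -> bytes:
    """Serialize an AMI action: CRLF-separated headers ending in a blank line."""
    lines = [f"Action: {action}"] + [f"{k}: {v}" for k, v in headers.items()]
    return ("\r\n".join(lines) + "\r\n\r\n").encode()

def originate(host: str, username: str, secret: str) -> None:
    # AMI listens on TCP 5038 by default; the user must be enabled in manager.conf.
    with socket.create_connection((host, 5038)) as s:
        s.sendall(build_action("Login", Username=username, Secret=secret))
        # Originate a call and drop the answered channel into a dialplan context.
        s.sendall(build_action(
            "Originate",
            Channel="PJSIP/1000",   # placeholder endpoint
            Context="engine-in",    # placeholder context that hands off to the engine
            Exten="s",
            Priority="1",
            Async="true",           # don't block the AMI session while dialing
        ))
        print(s.recv(4096).decode(errors="replace"))
```

In practice I would probably use an AMI client library rather than raw sockets, but the framing above is what goes over the wire either way.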
Question:
What is the best way to integrate my system with Asterisk, given that it currently works with Twilio? Should I proceed with AudioSocket, or is there a more suitable option?
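For context, my understanding of the AudioSocket wire format from the Asterisk documentation: each frame is a 1-byte kind, a 2-byte big-endian payload length, then the payload; the first frame carries the 16-byte call UUID, and audio frames carry 16-bit signed linear PCM at 8 kHz mono. A sketch of the framing helpers I would build the engine side on (the constant names are mine):

```python
import struct

# Frame kinds as I understand them from the AudioSocket protocol description.
KIND_HANGUP = 0x00  # terminate the connection
KIND_UUID = 0x01    # first frame: 16-byte call UUID
KIND_AUDIO = 0x10   # 16-bit signed linear PCM, 8 kHz mono

def pack_frame(kind: int, payload: bytes = b"") -> bytes:
    """kind (1 byte) + big-endian payload length (2 bytes) + payload."""
    return struct.pack(">BH", kind, len(payload)) + payload

def unpack_frame(buf: bytes) -> tuple[int, bytes, bytes]:
    """Parse one frame; return (kind, payload, remaining bytes)."""
    kind, length = struct.unpack(">BH", buf[:3])
    if len(buf) < 3 + length:
        raise ValueError("incomplete frame")
    return kind, buf[3:3 + length], buf[3 + length:]
```

If AudioSocket is the right choice, my engine would run a TCP server, read the UUID frame to correlate the call, then exchange audio frames in both directions.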