Sending Audio Frames to Channel


I am developing an application using Asterisk and have written a C file which reads the audio frames that arrive on a channel and stores these frames in an array. This continues until the user presses ‘#’, at which point I send the stored audio frames back onto the channel using the ast_write function. However, I am not able to hear anything on the channel. I was expecting to hear my recording. I am able to hear myself on the channel if I just send each audio frame back into the channel as it arrives. Is there some catch in doing this?


Further, I have tried the following simple experiment. When I receive an audio frame, instead of sending the current audio frame back into the channel, I send the (curr_frame - 100)th frame that I have stored in my array (after 100 frames have been collected). I was hoping this would take care of any timing issue related to sending audio frames on the channel too fast. But, very strangely, I still hear what I am saying currently rather than what I said 100 frames earlier. What am I missing here?
Thanks in advance.

I found the problem, and I post the solution for the benefit of all. ast_frame contains a void* pointing to the actual frame data. This is usually non-malloc’d data, meaning that the pointer points to a fixed location in memory whose contents are overwritten with every new frame. Thus if one directly stores the ast_frame, one is not actually saving the frame’s data. The solution is a function called
struct ast_frame *frisolate(struct ast_frame *f), which creates a copy of the ast_frame and duplicates its non-malloc’d elements, such as the data. If one stores the output of this function, one is actually storing the frame data. I was able to fix my problem using this.

I think it would be much easier to store the frames in a file, the same way Dictate/Record do, and then play it back. Of course, you would be storing the modified frames.

That causes a file read/write delay that we want to avoid, hence we want to keep all the information in memory.
I am able to hear my voice when I play back a stored frame each time a new frame comes in. However, if I attempt to play back this collated array of frames by putting it inside a for-loop going from 0 to the number of frames and calling ast_write(array[i]), I am not able to hear anything. Is there a timing that has to be observed when writing these frames back into the channel? Or should I use some other function?

Thanks - Harshat

I rewrote chan_alsa for one project. There I stored frames, resized them, and then sent the new frames, because the hardware required bigger frames.
I used timing from the hardware: writing or reading a frame there took a very precise amount of time, and I synchronised that way.
I think you can synchronise on the incoming frames.

But you still suggest that I use the ast_write function? I mean, is there no other function that takes care of this timing requirement? What is chan_alsa used for? Sorry for all the questions; I am new to Asterisk. Thanks

Sorry - I checked my code. I used snd_pcm_readi/snd_pcm_writei, not ast_read/ast_write.
chan_alsa is the channel driver for a hardware speaker/microphone (ALSA).
I can only suggest taking a look at how Asterisk implements Playback.
You are doing playback too, just from a buffer instead of from a file.
Or you can check how Echo is done.