Hang up if no MOH streaming?

Definitely not breaching any performing rights. We stream our services on YouTube, Facebook, and Sermon Audio. The music used (congregational singing only) is public domain.

Playback is interesting. Nobody has ever suggested that before. The only information I can find on Playback is here. It appears that it doesn’t play a sound file, but it’s a robot voice that speaks the words you put in the brackets? Is that correct?

That’s not true, and it is one of the most used applications, so you should really know about it. This is the primary documentation:

https://wiki.asterisk.org/wiki/display/AST/Asterisk+18+Application_Playback

Unfortunately, when used on an external resource (a capability that seems to be missing from the page above), it appears to be store and forward, not streaming.

https://wiki.asterisk.org/wiki/display/AST/Asterisk+14+Project+-+URI+Media+Playback

I don’t know if it will accept a named pipe as a file, or a symbolic link to /dev/fd/xxxx, to allow a pipe to be set up as input. You will presumably need a simple, raw file, so that no seeking is required.

You also really need to understand the conventions used to describe programming-language syntax (and command syntax). This use of square brackets long predates the internet, let alone VoIP. The brackets are metacharacters used to indicate optional content. There are variations on the notation used by different software, but Asterisk’s notation is close to standard Unix notation, and the use of square brackets for this purpose is very old, going back well over 50 years. (The limit to my checking this is likely to be set by the lack of internet resources that are that old.)
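For example, the Playback synopsis in the documentation reads roughly like this, where only filename is required and everything inside square brackets may be omitted:

```
Playback(filename[&filename2[&...]][,options])
```

The brackets are part of the description, not something you actually type into the dialplan.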

There’s also the AudioSocket: Home - Asterisk Documentation

I think the GitHub page shows how to use it, but some work might still be necessary. I haven’t looked at it myself so far, but it’s probably worth the effort.

The browser is not mpg123. I am not aware that you have shown us how you call mpg123. There are some subtle details relating to console usage that can cause trouble when everything is supposed to run in the background.

I figured out a happy medium. It still doesn’t do exactly what I want, but it’s not too bad. In my dialplan, I’m using ExecIfTime. Here is an example from extensions.conf:

[from-siptrunk]
exten => inbound-calls,1,Verbose(1,Playing sound.)
same => n,Answer()
same => n,ExecIfTime(20:00-23:59,sun,*,*?Playback(silence/1&please-try-call-later&silence/1))
same => n,ExecIfTime(00:00-23:59,mon-sat,*,*?Playback(silence/1&please-try-call-later&silence/1))
same => n,ExecIfTime(00:00-10:54,sun,*,*?Playback(silence/1&please-try-call-later&silence/1))
same => n,ExecIfTime(10:55-13:10,sun,*,*?MusicOnHold(ulawstream))
same => n,ExecIfTime(13:11-17:54,sun,*,*?Playback(silence/1&please-try-call-later&silence/1))
same => n,ExecIfTime(17:55-19:59,sun,*,*?MusicOnHold(ulawstream))
same => n,Hangup()

So what is going on here is that from 10:55 am to 1:10 pm and from 5:55 pm to 7:59 pm, MusicOnHold (the livestream) will play. At all other times someone calls in, they will hear the message please-try-call-later.gsm and Asterisk will then hang up. I figured out that between the parentheses of Playback(), you give the name of a .gsm file (without the .gsm extension) located in /var/lib/asterisk/sounds/en. I plan to create my own GSM sound file and use that instead (i.e., I will tell the caller when to call back).

So the only thing I would like to do is have Asterisk hang up when the time slot for ExecIfTime ends, but I haven’t figured that out yet. I tested it, and Asterisk says it hung up, but it didn’t (I’m still connected by phone).
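One untested idea for the hang-up-at-cutoff problem: MusicOnHold accepts an optional duration argument (in seconds), and the STRFTIME/STRPTIME dialplan functions can be combined to compute the seconds remaining until a cutoff time, so the stream stops and the call falls through to Hangup(). A sketch (the 13:10 cutoff is just the example from above; I haven’t verified this on a live system):

```
; Untested sketch: compute seconds remaining until today's 13:10 cutoff,
; then cap MusicOnHold at that duration so Hangup() is eventually reached.
same => n,Set(cutoff=${STRPTIME(${STRFTIME(${EPOCH},,%Y-%m-%d)} 13:10,,%Y-%m-%d %H:%M)})
same => n,Set(remaining=$[${cutoff} - ${EPOCH}])
same => n,ExecIfTime(10:55-13:10,sun,*,*?MusicOnHold(ulawstream,${remaining}))
same => n,Hangup()
```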

Additionally, I had to set the proper time zone on my GCP compute instance. I did this by first choosing my appropriate time zone, and then (assuming something like America/New_York):

sudo timedatectl set-timezone America/New_York
And to check that the time zone is correct:
timedatectl

You should record in the ‘highest quality’ encoding you can. Usually this will be PCM/WAV. Then, encode separate files (same name, different file types) for each codec in use by your callers. This will minimize/eliminate ‘call time’ transcoding, which just wastes CPU cycles and disk accesses.
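For example, assuming sox is installed and announcement.wav is already 8 kHz, 16-bit mono PCM (the file names here are just examples), one file per codec might look like:

```
sox announcement.wav -t ul -r 8000 -c 1 announcement.ulaw   # G.711 mu-law
sox announcement.wav -t al -r 8000 -c 1 announcement.alaw   # G.711 A-law
sox announcement.wav -t gsm -r 8000 -c 1 announcement.gsm   # GSM full rate
```

Drop them all into /var/lib/asterisk/sounds/en/ and Asterisk picks whichever one avoids transcoding for the channel’s codec.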


I guess GSM might work for unaccompanied solo, or unison, vocals, but you should not even include it in your list of codecs for any other type of music. GSM is a voice codec, intended only for speech. G.711 (A-law or mu-law, depending on country) or G.722 would be the only codecs that might make sense when presenting music to PSTN users, and G.722 may not have wide support.

🙂

(filler to get to the 20 character post minimum.)

Yes. The plan is to make a voice-only message (no music). I just want to tell the caller to call between specific times on Sunday.

Can you further elaborate? So record in WAV, but then encode them as what? MP3? AAC? FLAC? But how can Playback play those files? I thought Playback only plays GSM files, and only from the /var/lib/asterisk/sounds/en folder?

Or do you mean somehow encoding into G.711 and G.722? If so, I would have a G.711 GSM file and a G.722 GSM file? But how would I have two files of the same name and different codecs?

I’m just kinda confused by what you’re talking about and this is all new to me.

The comment assumed you were setting up sensible codecs for production use, which, except for a low-wage-area call centre, are going to include G.711 or better. In that case, you would want the recording in Asterisk .wav format, i.e. 8 kHz, 16-bit, mono, signed linear PCM. If you were using G.722, you would need 16 kHz audio, but Asterisk would require that in raw format (slin16).
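If you have sox available, producing those two formats from a higher-quality master might look something like this (file names are examples, and the .sln16 extension is what Asterisk expects for raw 16 kHz signed linear; verify the flags against your sox version):

```
# 8 kHz, 16-bit, mono, signed linear PCM in a RIFF container (.wav)
sox master.wav -r 8000 -c 1 -b 16 -e signed announcement.wav

# 16 kHz raw signed linear for G.722 callers (.sln16)
sox master.wav -t raw -r 16000 -c 1 -b 16 -e signed announcement.sln16
```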

I still don’t understand why you would want to use GSM, even if this is only a proof of concept system. I would have used the codec appropriate for the production system.

G.711 and GSM are different codecs. G.711 GSM does not make sense.

You can have multiple formats of file and Asterisk will choose the easiest one to use.

MP3 is a lossy conversion, so there is no point going from linear PCM (.wav) to MP3 to G.711, as you will get a more faithful reproduction by going directly from linear PCM to G.711.


I don’t know how to playback a file (using Playback) that is not .gsm extension in the /var/lib/asterisk/sounds/en folder.

The answer is you don’t, since you don’t specify the file type when you ask Asterisk to play a file. Asterisk automagically chooses the appropriate file for you.

I’m not an expert in the minutiae, but this is my understanding…

When requested to play a file, Asterisk will look for a file with encoding (assumed based on file type) that matches the negotiated channel codec. Failing that, Asterisk will choose the file with the lowest cost translation path from available files to the negotiated channel codec. If no translation path exists, the request fails.

For example, if you have:

/var/lib/asterisk/sounds/en/demo-congrats.ulaw

and the channel codec is gsm, Asterisk will translate the audio samples to SLIN (think ‘internal format that is easy for Asterisk to work with’), then translate the samples to gsm, and then play the samples. This translating is a ‘CPU cycle expensive’ operation and will be repeated every time the file is played, for every caller. Over and over.

If you have:

/var/lib/asterisk/sounds/en/demo-congrats.gsm
/var/lib/asterisk/sounds/en/demo-congrats.ulaw

and the channel codec is gsm, Asterisk will choose demo-congrats.gsm and no CPU cycles will be wasted on translation.

Thus, if you’re concerned with efficiency, you will want to transcode your files into each codec that your callers use.

Asterisk has a CLI command (‘file convert’) to transcode a file, but I prefer to use the Linux command ‘sox’.
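For anyone searching later, the CLI version looks something like this (the paths here are examples, not from a real system):

```
*CLI> file convert /tmp/announcement.wav /var/lib/asterisk/sounds/en/announcement.ulaw
```

You can also run it non-interactively from a shell with asterisk -rx 'file convert …'.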

Personally, I keep all my files in WAV format since it’s the ‘lingua franca’ for utilities. Since my current gig is not a high volume system and all my callers use ULAW and the translation cost from PCM to ULAW is low to non-existent, I don’t bother with transcoding to other codecs. In high volume systems where every cycle counts, I’d transcode to all codecs in use.


A caution: Asterisk interprets the extension .WAV as meaning a .wav (RIFF) file containing GSM-encoded audio, so you want .wav (lower case). Also, the .wav file has to be 8 kHz, mono, 16-bit, signed linear PCM, not any arbitrary .wav file.

I meant WAV as the file format, not the file type 🙂

For a correctly formatted .wav, the ‘file’ command will display something like:

/var/lib/asterisk/sounds/en/demo-congrats.wav:  RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, mono 8000 Hz

Is .WAV containing GSM used anywhere outside of Asterisk?

It wouldn’t have been used in Asterisk if that weren’t the case.

sox can do so, see [Freeswitch-users] .wav vs .gsm file sizes for recording calls.

I imagine real Microsoft audio programs that support GSM will write RIFF codec 49 files rather than raw GSM files. Looking at Sound Recorder (Windows) - Wikipedia, it looks like early versions of Windows Sound Recorder supported it (current ones don’t use .wav).

I assume the support exists for the benefit of people who are primarily Windows users.

So…while I completely agree with you (I’m actually an audio guy getting into phones, oh the irony), I’m also pragmatic enough to say the loss is not going to be enough to notice. Below 4 kHz (an 8 kHz sample rate), you don’t really hear all the lossy horrors MP3 brings to the table. Even if you pre-filter its bandwidth to a 4 kHz response and encode it at a more valid higher rate (Layer III technically doesn’t support an 8 kHz sample rate), the amount of audible loss isn’t going to be noticeable on a phone. Most people will probably be calling from a cell phone…which will further encode it to something. On one hand, even I would argue that’s all the more reason not to use MP3 in the middle…but even VoLTE codecs sound bad enough that if it’s easier for someone to use MP3, it’s not a big deal.

I’ll also point out that my three MOH streams are actually 16 kHz q3 Ogg files, pre-processed and pre-filtered to 4 kHz. Ultimately…mplayer resamples these to 8 kHz ulaw for Asterisk. I could not hear much audible difference between my lossy source and pure ulaw.
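For anyone wanting to replicate this, a custom MOH class along these lines should work. The player command and URL below are illustrative, not my exact setup; the only hard requirement is that whatever application you run writes raw 8 kHz, 16-bit, mono signed linear audio to stdout:

```
; musiconhold.conf — hypothetical streaming class
[ulawstream]
mode = custom
; must emit raw slin (8 kHz, 16-bit, mono) on stdout
application = /usr/bin/mpg123 -q -s --mono -r 8000 http://example.com/stream.mp3
format = slin
```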

Your understanding is the same as mine. Asterisk chooses the extension automatically based on the call environment; so if the provider and all your phones are using ulaw (which is pretty standard)…then it automatically looks for .ulaw files.

The question is why threehappypenguins’ PBX is set up for GSM. Bad configuration? Lousy provider?

Very rarely. There’s full-rate GSM (6.10) built into Windows. The last time I had a reason to encode something was to the AMR version of GSM…and that was just to play around with sending audio files on the old T-Mobile Sidekick devices.

Anything that sticks to the WAV specification should. Goldwave for example wrote the format as 49 (0x31) at offset 20…just as expected. However Goldwave only has access to the GSM 6.10 codec that comes with Windows…still.

It’s technically not a “file format”. The “WAVE audio” portion refers to the fact that it’s identified in the RIFF specification as sampled audio. In this case, the file type is the container format: .wav files follow a specific layout for the header and data storage, which doesn’t specifically denote what format of samples is stored inside.


Thank you, @sedwards for the excellent explanation!

I apologize for the noob question… but what do you mean I have a PBX set up for GSM? I know what a PBX is (my Asterisk server), but how is it “set up for GSM”? The only things I know how to do with Asterisk are edit the .conf files to register a SIP provider, and route callers with the dialplan (along with having them listen to MOH).