CPU Question

Hi Guys,

Just playing with conferencing, which I understand is CPU intensive.

When I make a few calls to the conf, CPU utilisation by Asterisk reaches 7%. Codec in use is GSM.

What I am experiencing is delays in voice prompts and messages. Also, for the first caller, MOH stops while the prompts are playing to subsequent callers.

Does this sound like lack of resources?

This is not a major problem, as I am just testing, but I would be grateful for any comments.

thanks

sean

Hi

Do you have a zaptel card in the system?
Also, when a conf is active, run zttest and see what you get.

Ian

Is this a physical machine, or are you running in a virtual machine (VMware etc.)? This sounds like the classic Asterisk-in-a-VM issue.

If physical, how much RAM do you have, and how much is in use vs. paged out? What about the disk subsystem: do you have high I/O wait times or long disk queues?

Hi Guys,

There is no card; I am using ztdummy.

The results of zttest show an average of 99.8888%.

It is not VMware; it is a little low-power (11W) PC with CentOS 4.5 installed. The RAM is 1 GB and the CPU is:

[root@localhost zaptel-1.4.5.1]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 14
model name : Intel(R) Celeron(R) CPU 215 @ 1.33GHz
stepping : 8
cpu MHz : 1333.798
cache size : 512 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss tm pbe nx constant_tsc pni monitor tm2 xtpr
bogomips : 2667.94

The swap file is 0k

comments appreciated.

Thanks
Sean

That might have been better written:

The swap file is zero k.

As an example, if I say my name and then press the pound key, it is exactly 8 seconds until the 'thank you' prompt.

The odd thing is the delay is exactly the same each time!

Could it be a timing source problem?

Hi

There's the problem.

With ztdummy as the timing source, on a meaty server you can get away with small conferences, but as the conference gets bigger you will get problems.
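To put a rough number on it, here is a back-of-the-envelope sketch. This assumes the zttest score is the percentage of the expected timing ticks actually delivered on time; that is my reading of it, not an official definition, and the figures below are only illustrative:

```python
# Rough interpretation of a zttest average of 99.8888 (assumed to mean
# "percentage of expected timer ticks delivered on time").
accuracy_pct = 99.8888   # the average zttest reported in this thread
sample_rate = 8000       # telephony sample rate, samples per second

shortfall = (100.0 - accuracy_pct) / 100.0
drift_ms_per_s = shortfall * 1000.0        # timing error per wall-clock second
samples_short_per_s = shortfall * sample_rate

print(f"{drift_ms_per_s:.3f} ms of drift per second")
print(f"~{samples_short_per_s:.1f} samples/s short of real time")
```

So even a score that looks close to 100 can mean the mixer falls about a millisecond behind every second, and MeetMe mixing all the legs against that one shaky clock is where the stalls come from.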

Ian

Thanks again Ian!