[FIXED] CPU usage VERY HIGH on a single core

I have an Asterisk 11 server (AsteriskNOW installed without FreePBX, based on CentOS 6) running as a VM with 12 cores. The server has been updated via yum to the latest Asterisk and CentOS 6.6 packages.

It is currently running around 1000 simultaneous SIP channels, evenly spread across 200 different ConfBridge conferences. 60% of them use G.711 (SIP trunks from ordinary phones) and 40% use G.722 (SIP phones).

My problem is that in htop, top, or mpstat I see a single CPU core being very loaded, but not the others, as if a single thread were using a lot of CPU. But if I switch htop or top to show threads instead of processes, no single thread uses more than a few percent CPU.
Another strange thing is that Load Average shows very low values.
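
For reference, the views I am describing can be reproduced with something like this (mpstat comes from the sysstat package; 2270 is the Asterisk PID from the screenshot below):

[code]# Per-core utilization, refreshed every 2 seconds
mpstat -P ALL 2

# Per-thread CPU usage of the Asterisk process
top -H -p 2270

# In htop, press H to toggle between showing processes and their threads
[/code]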

See the screenshot of htop below (PID 2270 is the main Asterisk process, and the asterisk PIDs below it are all the threads it consists of).

I am wondering:
[ul]
[li]Is the info about one CPU core being very loaded correct? What will happen when it reaches 100%?[/li]
[li]If the info is correct, why does Asterisk/CentOS not move some of the load to other CPUs to even it out, when you can clearly see that no single thread is using a lot of CPU?[/li]
[li]Why is Load Average so low when the Asterisk process clearly uses 432% CPU? Load Average should show something close to 4.0-4.5.[/li]
[/ul]

To test whether Load Average was working correctly, I created a bash loop script that puts 100% CPU load on one core, and ran the script multiple times simultaneously. If I run the script twice, putting 100% load on 2 CPU cores, Load Average correctly shows 2.0.
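
The script itself is nothing fancy; roughly a sketch like this (burn.sh is just an illustrative name):

[code]#!/bin/bash
# Spin one core at 100%; run N instances in the background to load N cores:
#   ./burn.sh & ./burn.sh &
while true; do :; done
[/code]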

Can anybody explain this, or give me some insights?

I am seeing a similar problem after upgrading to asterisk11-11.17.0.

[code]FreeBSD pbx 10.1-RELEASE-p9 FreeBSD 10.1-RELEASE-p9 #0: Tue Apr 7 01:09:46 UTC 2015 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64[/code]

[code]last pid: 34196;  load averages: 1.10, 1.15, 1.08   up 0+02:02:03  23:47:39
25 processes: 2 running, 23 sleeping
CPU: 10.7% user, 0.0% nice, 41.4% system, 0.0% interrupt, 47.9% idle
Mem: 151M Active, 451M Inact, 162M Wired, 202M Buf, 1200M Free
Swap: 4096M Total, 4096M Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME    WCPU COMMAND
33945 asterisk   40 103    0   585M 58908K CPU0   0  40:58 100.00% asterisk[/code]

Scheduling of work to CPUs is done by the OS, not by Asterisk.
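
If you want to rule out an affinity restriction on the process itself, something like taskset can show (and, if needed, widen) the CPU set the OS is allowed to schedule it on; PID 2270 here is just the one from the screenshot above:

[code]# Show which CPUs the Asterisk process may run on
taskset -cp 2270

# The scheduler already spreads runnable threads, but the mask can be
# widened explicitly if it turns out to be restricted:
#   taskset -cp 0-11 2270
[/code]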

It seems to be related to the NIC only using CPU0.
Looking at /proc/interrupts, it's clear that the IRQ used by eth0 is only serviced on CPU0, even though irqbalance is running.
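
For anyone checking the same thing, this is roughly how to inspect and change the IRQ affinity; IRQ 24 is only an example number, use whatever grep reports on your system:

[code]# See how eth0's interrupts are distributed across cores
grep eth0 /proc/interrupts

# Current CPU mask for that IRQ (here assuming it is IRQ 24)
cat /proc/irq/24/smp_affinity

# As root: "fff" would allow all 12 cores, "001" pins it to CPU0.
# Note the E1000 exposes a single queue, so one CPU still services
# each interrupt regardless of how wide the mask is.
echo fff > /proc/irq/24/smp_affinity
[/code]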

After switching from the E1000 vNIC (which only uses one CPU) to the VMXNET3 vNIC, the load from the NIC is now spread across 8 CPU cores.
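
To verify the change, the multi-queue IRQs and the resulting spread should be visible (queue labels in /proc/interrupts vary by driver version, so this grep is only a guess at the naming):

[code]# VMXNET3 exposes multiple rx/tx queues, each with its own IRQ line
grep eth0 /proc/interrupts

# Confirm the network softirq load is now spread across cores
mpstat -P ALL 2
[/code]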