Sangoma v Digium


#1

Putting together a 4-8 E1 box and wondering whether we should be considering Sangoma. Most people seem to think that Digiums are badly designed for serious traffic, and that putting more than 1 card into a single chassis can lead to instability.

I was wondering if the Sangoma (or even Aculab) cards might be any better?

Also, are the benefits of on-card DSPs running echo-cancellation really worth having? Experiences most welcome…


#2

The number of cards you can put into a machine depends heavily on what you are doing. In recent benchmarks performed by Matt Florell, he found that Digium cards held up better under serious traffic. However, I must admit I am heavily biased.


#3

The difference in performance was actually 2-5% in favor of Digium, so it was not a significant difference. Also, this was only on a system with a single quad T1 card.

As for the echo-cancellation cards, they do reduce the CPU load on the machine by 10-20% (it’s about the same for the Digium and Sangoma models).

I have had quite an interesting (read: frustrating) experience with two Digium TE405P quad cards in a single machine. LONG STORY FOLLOWS:

I put two quad-T1 Digium cards in a server and had periodic issues, like recordings randomly not working when calls were bridged across cards, and the occasional crash. Then, after about 5 weeks, the PCI slot holding the second card died; it just went dead and nothing else would run in it (both quad cards still worked fine elsewhere). Figuring the motherboard was bad, I put in a brand-new motherboard, and things went the same way for about 5 weeks until it happened again. This time I had swapped the cards, and it was again the card in the second slot whose PCI slot died. I then decided to just move that card to the fourth slot; that ran OK for a few weeks, and then that slot died too.

Then I got another new motherboard of a different model, and again the second slot died after a few weeks. I then moved the second card to the 4th slot again and called Digium support, since I knew by then it was definitely not a motherboard problem. They logged into the machine, said everything was fine, and told me that if it happened again I should get another motherboard. After 3 weeks the 4th slot died.

Then I got yet another new motherboard (the 4th in this story, for those of you not counting) of yet another model, and swapped in two TE405P cards from other production servers so that everything in this machine was new. Again the second slot died after a few weeks. At that point I replaced both cards in that system with Sangoma A104u cards (one each in the 3rd and 4th slots), and they have been running fine on that same motherboard for the last 9 months.

As a side note, I put both of the Digium quad cards in other production servers by themselves and they are both running as well to this day.

I have no idea what caused this, but it was reproducible, and calling Digium tech support led to no solution, so I went with Sangoma cards and my system runs fine now.

I know that Mark hates it whenever I mention that something about Digium hardware might not be perfect, and that in doing so I am “hurting Asterisk”, but it does need to be said. I have heard from many other people who have had problems like this with multiple Digium cards in a single server, and I have also heard from several others who have up to 3 quad Digium cards in a single server with no issues, so your experience may differ.

You need to keep in mind that Sangoma cards do have a 5 year warranty and Digium cards have a 2 year warranty. Also, I had a T400P go bad 2 years and 3 months after I bought it and Digium would not offer a replacement or even any trade-in value for it on a new card.


#4

Wow…

That sounds like a seriously stressful story, but thanks for sharing it.

Surely there must be others running dual 4-port cards; it would be interesting to hear from them. It seems to me that it should be cleared up whether or not Digium cards have a problem when used in multi-card setups. I’ve certainly heard from other sources that multiple-card systems can behave strangely, although I’ve not come across stories of trashed mobos before.


#5

Multiple 4-port cards are bound to cause weird behaviour, due to the sheer number of interrupts per second, if you haven’t got a high-quality motherboard (i.e. each PCI slot on an independent channel on the PCB, with a decent I/O controller). I’ve always found it’s best to cluster rather than put extra cards in. It all depends on the situation really.
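If you suspect interrupt contention, one quick check is whether the telephony cards have ended up sharing an IRQ line. A minimal sketch on Linux; the "wct4xxp" and "wanpipe" driver names are examples and depend on which drivers your cards actually load:

```shell
# Print the IRQ number and driver name for each telephony interface.
# Substitute the pattern for whatever your driver registers as.
awk '/wct4xxp|wanpipe/ {print $1, $NF}' /proc/interrupts
```

If two cards report the same IRQ number they are sharing an interrupt line, and moving one of them to a slot on a separate PCI bus may help.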


#6

ChrisUK is right; it is better to go with clusters of slightly cheaper machines. You will have more redundancy, save money, and have an infrastructure that can grow more easily than it would with just one large server.

After my dual-4-port-card experience I revamped part of our infrastructure so that we would never have to run a dual-card system again, and we are better for it. Performance is now not an issue, since we just add another single-P4 Asterisk server if we need more capacity.


#7

Ah, this is sounding very sensible.

You talk of a single P4 server managing a 4-port PRI card (I like the sound of this - OK, it’s more expensive on rackspace, but we’ve got a good deal on that).

Going back to my spec of 120 simultaneous SIP-PRI calls + 200 SIP-SIP, how much leftover capacity is there for simple SIP-SIP traffic (i.e. with no transcoding) once you’re at 100% of PRI capacity?
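For what it’s worth, the 120 figure is just the bearer-channel arithmetic for a 4-span E1 card, assuming the usual 30 B-channels per E1 span:

```shell
# 4 E1 spans, each with 30 bearer channels (31 timeslots minus signalling)
spans=4
channels_per_span=30
echo "$((spans * channels_per_span)) simultaneous PRI calls"
```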


#8

Your capacity depends greatly on what kind of traffic you have going on. Straight SIP->SIP with no transcoding can actually run natively and takes very little in the way of resources per conversation, and regular T1 termination does not usually overload a system either. For both at the same time, I would split the load between 2 servers myself.

In the end it’s very hard to estimate load; you really have to see how your specific traffic affects the server. That’s pretty much how it is for everyone with Asterisk.

Our P4 servers are usually at least 3.4GHz Prescott with 2MB L2 cache and at least 2GB of DDR2 RAM.


#9

Well, I think we’ll go 3.2GHz 1MB L2 (our proposed mobo is Skt478) and try and do some tests and see which bit starts glowing first.

Actually, if anyone here has done any capacity tests that they’ve found useful, I’d really like to hear about them. I’m proposing that we set up a looping test: SIP in goes out on one PRI port, comes back in on another, and is then forwarded around so that it results in, perhaps, 30 calls. Then we can run 4 of these simultaneously and measure setup speeds. I presume it will be the call setup that generates the load, and that once connected there’s very little work to do.


#10

…and we’ll go with Digium PRI cards for now (though I’ve a nagging doubt that Sangoma are possibly more robust!).

Can anyone recommend a UK supplier for a couple of TE405Ps? I think they all charge much the same, but if anyone here thinks any of them are more deserving of the business than others, then I’ll give them the business.


#11

Hi,

Here is a UK supplier of Sangoma cards, and most models are usually in stock.

They are also the Sangoma warranty centre in the UK and will handle warranty issues even for cards purchased elsewhere.

voipcomponents.co.uk/index.php

If you are planning anything serious, it’s best to ensure your warranty arrangements are sorted.


#12

Also, alex, call me for a few tips on building a cluster with robust yet affordable hardware.


#13

http://www.keison.co.uk/digium/digium.htm

Bought £3000 of stuff from them; great service and next-day delivery :smiley:.