CMake for a modern build system

Hi folks,

I’m working on a modern CMake-based build system for Asterisk to replace Autotools, in full collaboration with project maintainers.

My goal is not to change behavior or structure, but to improve:

  • Cross-platform support
  • Developer onboarding
  • IDE + CI tooling integration
  • Easier optional module configuration

I’ll be opening PRs in clean stages (core → modules → menuselect) and would love community feedback early.

If you have strong opinions about module toggling, platform support, or CI builds, ping me!
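To make the "easier optional module configuration" point concrete, here is a minimal hypothetical sketch of what per-module toggling might look like in CMake. The option and target names here are invented for illustration; they are not the actual Asterisk build files.

```cmake
# Hypothetical sketch of per-module toggling; names are illustrative only.
cmake_minimum_required(VERSION 3.16)
project(asterisk C)

option(BUILD_CHAN_PJSIP "Build the chan_pjsip channel driver" ON)

if(BUILD_CHAN_PJSIP)
    # Each optional module would become a MODULE library loaded at runtime.
    add_library(chan_pjsip MODULE channels/chan_pjsip.c)
endif()
```

A developer could then toggle a module from the command line with `cmake -DBUILD_CHAN_PJSIP=OFF`, or through any CMake-aware IDE or the `ccmake`/`cmake-gui` front ends.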

Regards,
Chris Roy

“Life is a tragedy for feelers and a comedy for thinkers”


Oh great, now I won’t be able to build Asterisk anymore on my 1995 Sparcstation!! LOL

Seriously, please stop using “cross-platform support” as a reason for pushing changes like this. You’re going to break as many weird/oddball/border-case systems as you include - probably more, actually. It’s like claiming “child safety” as a justification for banning a book when the real reason is you just want to keep women barefoot and pregnant.

As long as you don’t break the Raspberry Pi people’s build platforms, you’ll probably be OK - they seem to be the noisiest of the border cases.

On Sunday 29 June 2025 at 17:24:11, TedM via Asterisk Community wrote:

Seriously, please stop using “cross-platform support” as reasons for
pushing changes like this. You’re going to break as many
weird/oddball/border case systems as you include, probably more actually.

You are definitely entitled to an opinion, but if you want to make a difference
to the project maintainers’ potential decisions, please at least provide some
evidence for your claim.

I would also suggest that:

  1. It’s not the number of platforms which are supported by a build system
    which is important; it’s the number of users of those platforms. If CMake
    adds support for a platform which soon has 100 Asterisk users, but breaks
    support for a platform which has 20 users, I’d still say that’s a net win for
    Asterisk.

  2. The old build system will still work with old platforms, so it’s quite
    likely that nobody’s losing out at all.

Antony.


A committee is a group of people who keep minutes and waste hours.

  • Milton Berle

Oh I definitely agree, plus I will add that the “reasonability” of those platforms is equally important. For example, adding support for a very high-powered platform like a large server OS that could easily support 10,000 Asterisk extensions at 20% utilization is far more important than the latest Raspberry Pi, which at most could support 50 extensions at 5% utilization.

I’m just pointing out that my experience with FOSS communities is that they are top-heavy with “border condition” members - you will see more activity from 50 people running Asterisk on a Raspberry Pi for their own household than from the 500 people running Asterisk on an HP ProLiant Gen 10 supporting 1000 users - even though the logical thing (in my opinion) is for the Asterisk developers to work on supporting the 500×1000 users rather than the 50×1 users.
However in the past when I’ve suggested this - I’ve been dogpiled by the 50…

However - “cross-platform” is a FOSS buzzword meaning the total number of DIFFERENT platforms. It’s actually a marketing term - and by definition, marketing terms are designed to mislead the ignorant. Please don’t use it as a justification.

It’s not a bug, it’s a feature!

Open-Source projects do not depend for their success on sheer numbers of passive users. Instead, they live and die by the contributions from an active community. This doesn’t just mean development, but also documentation, helping out in support forums (like this one) etc. And perhaps monetary support as well.

So yes, if more active support comes from the Raspberry Pi users than from the HP ProLiant Gen 10 users, then that’s the way the cookie crumbles. If you are one of those HP ProLiant Gen 10 users, then it is up to you (and whomever among your fellow users you can gather together) to help keep the project alive on that platform.

The code doesn’t write itself, you know.

To provide clarification on this statement, this is not official work as part of the project or its maintainers.

Ah, no, not exactly. Nowadays the larger Open Source projects all have corporate sponsors. The Linux Foundation, for example, spends around $200M a year, including about $0.5M a year on Linus Torvalds’ salary.

The reality is that the Asterisk project went to this model many years ago, with Sangoma as the sponsor, and its developers do, in fact, exert very significant control over the code. Anyone in the community who contributes makes a sort of deal with the devil: while the corporate sponsors pay the bills to keep the lights on, the projects do go in the direction those sponsors want. Not ALL development is done by the developers those sponsors pay, but project control absolutely is. So the community CAN contribute - if they want - but their contributions may not actually be incorporated into the project, and may in fact be rejected - and scorned by other contributors.

The Gen 10 ProLiant is one of many Intel-based servers, and Sangoma’s Switchvox product runs on Intel-based CPUs - so no, actually, I don’t have to worry about that part of it, since Switchvox customers are paying for that development. And some of their customers are other businesses (like Grandstream) who pay source license fees to them for Asterisk.

However, I AM one of those chan_sip users and I DID make a contribution to keep that alive. It was a contribution that was, in fact, kept at arm’s length by the Asterisk project. They don’t exactly know what to do with it.

The same thing happened with Dave Burgess (formerly cjnut), who did a lot to contribute to the development of chan_sccp with the sccp-b development. Once more, the Asterisk project kept that at arm’s length. He passed away a few years ago, but I did contribute a guide for installing it on FreePBX as proof that it could be installed on Asterisk version 22 - mainly to shut up several ignoramuses on those forums who claimed otherwise.

chan_sip, on the other hand, has fared better. Despite Sangoma’s refusal to incorporate it into Asterisk itself, it’s still out there, and there are two forks of it that can be reintegrated into the Asterisk source by anyone who wishes. The first fork was created by a user who wanted to preserve it and who then proceeded to add fax control and other custom parameters; it’s located here:

GitHub - InterLinked1/chan_sip: Maintained version of the original chan_sip Asterisk SIP channel driver

The second fork was created - largely due, I think, to my prodding, by the developer of the usecallmanager patch, and it is available here:

GitHub - usecallmanagernz/patches: Patches for Asterisk.

in the Asterisk version 22 patch, which incorporates an updated chan_sip.

My prodding consisted of patching the first chan_sip fork with the usecallmanager patch to prove it could be done with Asterisk 21 (note that the maintainer of the first chan_sip fork claims it cannot be done). That pretty much presented the usecallmanager patch author with a choice: either do his own fork of chan_sip for Asterisk 22, or I would release a fork of usecallmanager myself for Asterisk 22 incorporating the first chan_sip patch. After waffling, he apparently decided the first choice was better. LOL

So, in summary, your simplistic statement is pretty much wrong - it does not matter what effort the userbase puts into something, because Asterisk has corporate overlords, and that effort can be squashed if those overlords don’t like what the userbase is doing. They apparently don’t like chan_sip.

Interestingly, other user/developers of Asterisk have TRIED suggesting a compromise/political middle ground where chan_sip was repackaged as something like chan_cisco by the usecallmanager patch author so that Asterisk could incorporate it - which is, in fact, precisely what he has done with his latest iteration. However, the Asterisk project’s corporate overlords are resolute in their rejection of chan_sip and will consider no compromise. I rather suspect this is related to their wishing NOT to infuriate the 600-pound Cisco gorilla, but that’s just my speculation.

Sangoma/Asterisk could easily squash the Raspberry Pi users no matter what that userbase does in the way of support, if they wanted to. But they don’t want to - my suspicion is that Sangoma is waiting for the RPi hardware to become powerful enough to be used as something better than a child’s toy, so they might release a sort of Switchvox-Lite based around it to complement the Intel-based Switchvox. Which has, in fact, been happening over the years within the RPi community - however, that community is currently becoming fragmented, since the product line has started to bifurcate, with some Pi versions cheaper and less powerful and some more expensive and more powerful.

ANYWAY, while the politics I just documented exist due to corporate sponsorship of Asterisk, they are rather small potatoes compared to the corporate politics that go on in the Linux Foundation. You may not know this, but there have been Windows Desktop lookalikes for X Windows for many years. Boomerang was one, and the Zorin OS project is probably the most successful.

But the response from the “major” Linux distribution developers has been pretty negative. These efforts exist to help migrate Windows desktop users over to Linux, which every Linux distro claims it wants to do - but when the chips are down, the majors just don’t. It would be easy to incorporate a checkbox into the major distros like Ubuntu Desktop, Debian, etc. to “make the resulting desktop look like Windows,” but they won’t do it. It’s all corporate politics - the Linux Foundation is funded by Microsoft, among others, and they don’t want to wave the red flag in front of the bull.

The same corporate politics exist with X Windows; check out this fascinating read on the politics going on nowadays:

Ubuntu 25.10 and Fedora 43 to drop X11 in GNOME editions • The Register

The statement from that article:

“… According to that new project’s leader, the X.org maintainers have turned down thousands of code changes and improvements to its X11 server in recent years…”

gives the lie to your claim that it’s up to me and my fellow users to keep Asterisk or any major FOSS project alive. It’s not. User effort only counts for part of the effort. Large FOSS projects require full-time employees, and someone has to pay to put food on the table of those employees. That money comes from the corporate community, which has to preserve its income streams - and so those partners apply pressure to FOSS, which at times results in the rejection of user efforts.

It’s an imperfect world. But I WILL point out that ultimately, even with the corporate politics and BS injected into FOSS, the result IS that those who make the effort get the software “for free, as in free beer,” while those who just want to get behind the wheel, turn the key, step on the gas and go - without understanding squat about what makes the engine run - end up having to pay. So it DOES preserve choice: you can choose to apply yourself and understand how things work on a starship, or you can choose to be lazy and ignorant and get mind-melded against your will.

Let’s keep this discussion on-topic please. Nobody is squashing anything. CMake has been around for 25 years and is, as far as I can tell, universally supported. We have no plans at this point to dump Autotools.

I’d like to see where this investigation goes. The autotools config process is large, ugly and hard to maintain and right now it takes longer to run ./configure than it does to actually run a full compile.

Nobody can squash anything. That’s what the “Free” in “Free software” means. Nobody can stop those Raspberry Pi users/developers from doing what they want with Asterisk, or any other Free software. The fact that the combined marketing might of Intel and Microsoft was incapable of coming up with a credible competitor to the Raspberry Pi should have been a hint to that.

The idea that large megacorporates have some power to dictate the direction of the whole movement is just ridiculous.

My guess is that Intel (50 bn/yr) and Microsoft (250 bn/yr) are ‘capable’ but not interested in such a small niche as the Raspberry Pi (250 m/yr).

The Raspberry Pi ships about 6 million units per year, and that’s not counting third-party knockoffs. So it’s not peanuts in terms of volume. But the margins are certainly low — deliberately so. It’s produced by a nonprofit foundation, after all. The boards are so cheap they still don’t include a battery-backed-up clock, because that would add too much to the cost.

Intel and Microsoft have tried to muscle in. Microsoft offers, or used to offer, “Windows 10 IoT Edition” (it might be Windows 11 by now), which was a pretty pathetic product, incapable of self-hosting its own development stack — you needed a full Windows PC for that.

Intel tried to offer its “NUC” products (which it has now given up on and passed off to Asus). But they were several times the price of a Raspberry Pi (certainly with several times the performance and power consumption, too), so totally aimed at the wrong market.

Getting back to the point…

My one experience with an open-source project that was trying to maintain two parallel build systems was Blender. For a while, they were including build scripts for both SCons and CMake.

I don’t know about anybody else, but the two ways of building the project always felt to me like they were producing two slightly different things, and it was too much trouble to figure out exactly what the differences were. When I submitted a patch to simplify part of the build process, I initially did it for CMake; I was then asked to do it for SCons as well, which I did. But I imagine other contributors might have felt that supporting two build systems was just too much trouble.

In the end, the Blender developers abandoned SCons and moved entirely to CMake.

I never said the “whole FOSS movement” - that is just pure hyperbole on your part. I said larger FOSS projects. Nor have I said at any point that the current system is bad, only imperfect.

Intel screwed the pooch with the NUC because they ignored the industry-standard mini form factor that Dell, HP, Lenovo, and every other PC maker in the world had worked out. So while those companies were producing mini form factor PCs and everyone out there was buying VESA mounts to slide their minis behind their monitors, nobody was interested in a NUC that had all the downsides of a desktop that occupies space, and all the downsides of a super-small form factor with cooling issues and weird nonstandard power supplies, with none of the upside of fitting a plethora of standardized mounts. The NUC hardware itself was fine; it was merely that Intel’s pride was hurt by recognizing that the PC had become a commodity item. It had nothing to do with any attempt to go after the Pi and everything to do with Grandpa remembering his glory days 30 years ago, when people actually paid attention to what he said and he could “set standards.”

Apple also went through that phase where a computer had to look like a table lamp or a doorstop or be the color of urine, but it does appear they finally outgrew it, and a MacBook today looks like every other laptop in existence.

But getting back to the point - there’s a reason ./configure takes longer than the build. Back in your grandpappy’s day there were actual differences between how different Unixes implemented system calls; the point of ./configure was to suss out all that weirdness between platforms and hide the differences behind variables which the code then paid attention to. So I would certainly expect that to take longer than the actual build.
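For what it’s worth, CMake does the same kind of platform probing with its check modules; here is a generic sketch (the specific headers and symbols checked are just examples, not taken from the Asterisk build):

```cmake
# Sketch of ./configure-style feature detection, the CMake way.
# Each check compiles a tiny test program once; the result is cached
# in CMakeCache.txt, so re-running the configure step is nearly free.
include(CheckIncludeFile)
include(CheckSymbolExists)

check_include_file("sys/epoll.h" HAVE_SYS_EPOLL_H)
check_symbol_exists(strcasecmp "strings.h" HAVE_STRCASECMP)

# The HAVE_* results can then be written into a config.h template via
# configure_file(), much like the macros autoconf generates.
```

Because the results are cached, a CMake re-configure typically only re-runs checks whose inputs changed, which is part of why it tends to feel faster than a full ./configure pass.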

But today - well, the different Unix, BSD, and Unix-alike/Linux distros are kind of like cities around the world today: they all pretty much look the same, in developed countries at any rate.

A case can certainly be made that autotools is solving a problem that mostly does not exist anymore - probably why the OP decided to dive into this and made the post that kicked this thread off. It will be interesting to see what they come up with. But whether or not it’s adopted is pretty much going to be up to Sangoma, I think…

So did the Raspberry Pi. But the Pi was successful, whereas Intel was not. Care to guess why?

Interestingly, the only official “UNIX®” left is MacOS, and that is the one that is least like the others. In the Blender CMake build scripts, there are quite a few occurrences of “if(UNIX AND NOT APPLE)”, where both Linux and the BSDs are able to comfortably fit under “if(UNIX)”.
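For readers unfamiliar with the idiom, the split looks something like this in practice (a generic sketch with an invented target name, not the actual Blender scripts):

```cmake
# Sketch of the platform-split idiom: Linux and the BSDs share one
# branch, while macOS ("APPLE") needs its own. "myapp" is hypothetical.
if(UNIX AND NOT APPLE)
    # Linux and the BSDs can usually be treated identically here.
    target_link_libraries(myapp PRIVATE X11)
elseif(APPLE)
    # macOS diverges: frameworks instead of plain libraries, etc.
    target_link_libraries(myapp PRIVATE "-framework Cocoa")
endif()
```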

Because - GPIO pins, for one thing. I have a lawn. I have a sprinkler system. I have all my sprinkler valves wired to a Pi controlled by OpenSprinkler.

Do you know what the equivalent is in the PC world? I’ll tell you. It’s a PC running sprinkler-control software that talks over the network to a sprinkler controller that is almost certainly built around a Raspberry Pi. Because PC I/O stinks, and you need I/O to control stuff. We have super-expensive things called PLCs, used in rock quarries and on assembly lines, that are controlled by PCs. Probably half of them in current operation could be replaced by Pis. And increasingly, more and more people are pulling their heads out on this issue and doing just that.

macOS is based on Darwin, and Darwin is a mix of NeXT’s Unix and FreeBSD. I don’t know if Steve Jobs ever did buy a source license for NeXT, so it may not actually be real UNIX.

Dropping autotools altogether is not a decision that would be taken lightly. While there would be no functional change to Asterisk, it would affect anyone building Asterisk and most especially packagers (including ourselves for FreePBX, etc). The good news is that Asterisk is a mature product and we don’t make changes to the configure.ac script very often so if we did accept CMake as an alternative, we could still keep autotools around for a decent amount of time.

Why do you think a vast American-based firm like Intel, with its huge resources in terms of talent and money, has been unable to come up with such an idea? Why was it left to a comparative pipsqueak of a British nonprofit foundation to show the way forward?

Maybe the world does not revolve quite so critically around these massive corporates as you might assume?

They did come up with that idea, about half a century ago, but the products in question tend to appear as almost anonymous components to the end user. Actually, in such terms, the RPi is grossly over-engineered for many applications. To some extent people design with the equipment they are familiar with, as long as it is cheap enough. The RPi is basically a smartphone without the phone, display, and touch screen.

The primary processors in typical PCs are designed for data processing, not for industrial control applications. The devices you probably associate with Intel are designed for very high processing rates, but rely on other devices to deal with the details of input and output.

Actually, for hobbyist and experimenter use on business-type PCs, one used to use parallel printer ports, but those stopped being included. The fallback used to be serial ports, particularly the modem control lines, but again, these are ceasing to appear in modern systems. (Although they were normally implemented as separate components, it does look like the Intel Gemini Lake systems-on-chip have GPIOs, and there are Linux drivers for them.)

The other problem with the modem control lines on a serial port is a dirty little secret: most USB-to-serial chips don’t actually implement them. Only the “real” serial chips (Oxford Tech, MosChip, etc.) on a PCI card you plug into the computer do.

Lastly, keep in mind that not all Pis come from the pipsqueak British foundation. Regardless of what that foundation claims it wants, there are all kinds of Pi copies out there that (apparently) get around RPi’s rules through slight changes in their hardware. And those copies are MUCH cheaper than an RPi.

If I were a product manager today of a product that needed a controller, I’d reach for one of the Chinese knockoffs as my first choice due to cost - just as the typical consumer reaches for a Chinese knockoff of a PC (like Lenovo) as their first choice due to cost.

If Sangoma hired me to design a non-rack-mounted hardware PBX based on FreePBX, I would also reach for one of the “mini” form factor motherboards far ahead of an Intel NUC, for the same cost reasons. Because not only is there the cost of the motherboard itself - there’s the case I’m going to put it in. If I use the mini form factor, then if my supplier goes belly up I can switch suppliers; I don’t have to pay to redesign the case, and my customers buying the thing are none the wiser that I changed suppliers. But if Intel gets tired of producing NUCs and I’ve designed my product around one - I’m screwed. I have to start all over again. Because the NUC ignored the industry-standard mini form factor.

THAT’s why the NUC failed. The collective weight of the industry and their mini form factor DID, in this case, cause the world to revolve around the industry and ignore Intel - similarly to how the collective weight of the Chinese rip-offs of the RPi seems to be doing the same thing in that market.

Oh, sorry if my framing hinted at that. I meant to say that, instead of forking and setting up a PR, I would like to get all of your input and incorporate it into the fork.