Manhattan Metric
Software and Science in Equal Measure

Building the Programming Language of the Future

Programmers are known for their petty squabbles. Whether it’s tabs vs spaces, 2-space indents vs 4-space indents, or placing brackets at the end of a line versus on a new line of their own, these disputes have carried on so long, and been fought so incessantly, that they have even pervaded popular culture. The one thing all these disagreements have in common, though, is that they are petty. But over the course of 2025, and into 2026, a new conflict has emerged in the world of programming, and it is anything but petty. It is, many would argue, foundational.

Should we hand over coding to the machines?

I cannot think of another time when a conflict of such consequence has so thoroughly split the world of computing. Certainly not all squabbles among programmers are petty, and past disagreements have had significant consequences for the industry. Probably the most famous of these is the one outlined in Eric S. Raymond’s essay “The Cathedral and the Bazaar”. In that work, Raymond sets two working models for open source at odds. “The Cathedral” approach, exemplified by the GNU project, involves strong personalities dictating or approving plans according to a long-term vision of what an open source project should be. “The Bazaar”, exemplified by Linux, takes a more egalitarian and evolutionary approach, where all comers are welcome to add their piece and the end result is an amalgam of the community’s collective will.

The dispute described by “The Cathedral and the Bazaar” is very real, and remains a matter of significant consequence…to the open source community. If one is honest, while the open source software movement has had an immense impact on the software industry as a whole, it is still a small minority of programmers who participate in the open source community. I have met, in my time as a programmer, more colleagues who have never heard of “The Cathedral and the Bazaar” than those who knew enough to hold a strong opinion one way or the other. So while the issue of how projects are organized has an impact on open source, and therefore transitively has an impact on the software industry, the reality is that most programmers can go to work day in and day out without an opinion on the matter, and this ambivalence is of no personal consequence.

The dispute over AI coding assistants is, in contrast, likely to be career-defining for practicing programmers, today and into the future.

I don’t want to delve too far into the arguments for or against using AI to write software. Many, many others have written at great length on both sides of this topic. I will attempt to roughly (if not clumsily) summarize the two sides of the dispute. On one hand, some argue that it is irresponsible, dangerous, impractical, or just plain “a bad idea” to let AI write software without a human picking through each line of code confirming that it does what is expected without errors or egregious security vulnerabilities. On the other hand, some argue that the sheer speed with which AI can generate veritable mountains of code and turn concepts into running software is too powerful to ignore and that, rather than reviewing everything AI writes, we need to adapt how we build software to unleash the full potential of this new tool.

Oftentimes, when I encounter a conflict like this, where both sides are dug in, valid points abound, and there is no clear metric by which one argument can be judged to prevail over the other, I will take a step back. Why does this conflict exist in the first place? I think the root of the problem can be found in a line from the preface of “Structure and Interpretation of Computer Programs” that I return to often:

First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute.

Computers are, at the end of the day, electrical circuits; vast, incredibly complex circuits etched into a substrate at microscopic scale, but circuits nonetheless. They operate on voltage differences and current. There is no language inherent in a computer aside from that which we apply to describe these circuits and how they operate. In other words, a computer does not care whether you label a variable foo or supercalifragilisticexpialidocious. To the circuit, it is all the same. The only reason that programming languages exist at all is because, as these circuits grew from the 18,000 vacuum tubes, 1,500 relays, and 6,000 switches of the ENIAC to the more than 100,000,000,000 transistors in a modern processor, we humans needed a way to make sense of it all.
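To make the point concrete, here is a small Python sketch (the function and variable names are my own, purely illustrative). CPython compiles two functions that differ only in the name of a local variable down to the same sequence of bytecode instructions; the name survives only in the symbol table, for the benefit of human readers.

```python
import dis

def with_short_name():
    foo = 41
    return foo + 1

def with_long_name():
    supercalifragilisticexpialidocious = 41
    return supercalifragilisticexpialidocious + 1

# Extract just the instruction opcodes for each function.
ops_short = [ins.opname for ins in dis.get_instructions(with_short_name)]
ops_long = [ins.opname for ins in dis.get_instructions(with_long_name)]

# The instruction streams are identical; only the variable name differs.
print(ops_short == ops_long)  # → True
```

The machine executes the same operations either way; the choice of name matters only to the people who have to read the program later.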

If the argument around AI generated code centers on the fact that AI can generate more code, faster, than we humans can understand it, then the problem isn’t the AI…it’s the code!