Disclaimer: This post was written in May 2025, and the arguments apply to AI code capabilities at that time. The arguments around lack of competence are likely to become less prevalent, while the parts about the desecration of the joys of programming, and the fundamental human understanding of programming, are likely to become more relevant over time.
As if anyone cared whether they had to wait a total of 3 seconds in a workday. If it’s a second per user action, we’re talking; but this is some bare-metal CPU wrangler’s take on how ‘efficient’ code should behave, completely disregarding that most users who touch a computer need 5 seconds to type ‘hi’ into MS Teams.
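Back-of-the-envelope, with numbers I’m inventing purely to illustrate (say a 60 ms micro-optimization on an action a user triggers ~50 times a day, versus a full second of lag on those same actions):

    #include <stdio.h>

    int main(void) {
        /* Made-up numbers: the point is that 3 s of total savings spread
           invisibly over a workday is nothing, while 1 s of lag per
           action is 50 felt interruptions. */
        int actions_per_day = 50;
        double saved_per_action_s = 0.060;  /* 60 ms micro-optimization */
        double lag_per_action_s = 1.0;      /* 1 s of perceived lag */

        printf("total saved per day: %.1f s (nobody notices)\n",
               actions_per_day * saved_per_action_s);
        printf("total lag per day: %.1f s, felt %d separate times\n",
               actions_per_day * lag_per_action_s, actions_per_day);
        return 0;
    }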
It’s interesting that everybody else preaches ‘Write for the human first, for the machine second’.
That depends on where the wait appears. Some tasks pretty much have to feel instantaneous, and the margin between okay and frustrating can be slim.
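The usual yardstick is Nielsen’s response-time limits: roughly 0.1 s reads as instantaneous, up to 1 s preserves the user’s flow of thought, and past about 10 s you lose their attention entirely. A throwaway sketch (the cutoffs are the published rules of thumb; the function and labels are mine):

    #include <stdio.h>

    /* Response-time rules of thumb (Nielsen, Usability Engineering, 1993):
       <= 0.1 s feels instantaneous, <= 1 s keeps the flow of thought,
       beyond ~10 s users mentally switch tasks. */
    const char *perceived(double latency_s) {
        if (latency_s <= 0.1)  return "instantaneous";
        if (latency_s <= 1.0)  return "noticeable but fine";
        if (latency_s <= 10.0) return "frustrating";
        return "abandoned";
    }

    int main(void) {
        double samples[] = {0.05, 0.4, 3.0, 15.0};
        for (int i = 0; i < 4; i++)
            printf("%5.2f s -> %s\n", samples[i], perceived(samples[i]));
        return 0;
    }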
But yeah, that’s the kind of savings that mostly matter on the scale of regional or national grid planning.
Yeah, the author seems to lean too hard into the “programming is electronics” model, where the opposing pole is “programming is math and formal logic”; most of us take some mixed view. And most of us have higher correctness requirements than what a reasonable effort in memory-unsafe languages like C and C++ gives us, so we trade away some machine efficiency. In the author’s parlance, most of us aren’t interested in the demoscene circlejerk; we need to make tradeoffs between maintainability and everything else. Write-once code isn’t good enough.
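The tradeoff in miniature, for anyone who hasn’t written C lately (toy code, not anyone’s real API):

    #include <stdio.h>
    #include <stdlib.h>

    /* A raw C array access costs nothing and checks nothing: an
       out-of-bounds index is undefined behavior the compiler accepts
       without a word. Writing the check by hand buys back correctness
       at the price of a branch on every access -- the machine
       efficiency being traded away. */
    int get_unchecked(const int *a, int i) {
        return a[i];                /* fast; i == 4 here would be UB */
    }

    int get_checked(const int *a, int len, int i) {
        if (i < 0 || i >= len) {    /* the branch safe languages pay for */
            fprintf(stderr, "index %d out of bounds\n", i);
            exit(1);
        }
        return a[i];
    }

    int main(void) {
        int squares[4] = {1, 4, 9, 16};
        printf("%d\n", get_unchecked(squares, 2));
        printf("%d\n", get_checked(squares, 4, 3));
        printf("%d\n", get_checked(squares, 4, 4));  /* caught at runtime */
        return 0;
    }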
There have been attempts at establishing a third pole of “promptgramming is natural language” or whatever ever since COBOL promised programming in plain English, but the ambiguity of natural language when used to encode a business-logic machine means that a “sufficiently advanced compiler” would have to be extremely advanced indeed, on the order of subsuming the manager and the entire engineering methodology.
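To make the ambiguity concrete, take a one-line business rule: “give members 10% off orders over $100.” Does “over” include exactly $100? Before or after tax? Off the whole order or only the qualifying part? The program has to pin down one reading; this sketch (mine, choosing arbitrarily) picks strictly greater than, pre-tax, whole order:

    #include <stdio.h>

    /* One of several defensible readings of "give members 10% off
       orders over $100": strictly greater than 100, pre-tax, applied
       to the whole order. Each choice is invisible in the English
       sentence but load-bearing in the program. */
    double price_after_discount(double order_total, int is_member) {
        if (is_member && order_total > 100.0)
            return order_total * 0.90;
        return order_total;
    }

    int main(void) {
        printf("member, $100.00 -> $%.2f\n", price_after_discount(100.00, 1));
        printf("member, $100.01 -> $%.2f\n", price_after_discount(100.01, 1));
        printf("guest,  $250.00 -> $%.2f\n", price_after_discount(250.00, 0));
        return 0;
    }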