The elephant in the room

I’ve so far avoided writing about The Thing on this blog, mostly because I wasn’t sure whether a post about AI would have me channeling naive optimism or launching into a bitter anti-capitalist diatribe. If you’re on social media, it’s easy to get the impression that software engineers either believe that “manual coding is already obsolete” or that “a single line of LLM output will turn your codebase into unethical and/or subtly broken garbage.” In person, though, I’m hearing much more nuanced takes from my network of industry peers. These folks are trying to approach AI just as they have any new tech throughout their careers—remaining cautious but maintaining the requisite level of curiosity.

I’ll be frank: some days I get mad about how AI has sucked all the air out of the room. It’s become a stand-in for any discussion about the future of software, contributed to a climate of fear and distrust in many tech organizations, made tech hiring even worse, and devalued the lifelong stores of trivia collected by career engineers. But other times, I’m able to breeze through a dreaded chore or dabble in areas where I previously lacked confidence or expertise—all with the help of the same tools. The latter does feel good and, for a moment, makes me forget that, as a customer, I have little agency and no privacy in the face of the companies building these tools.

By now, I’ve been using various LLM-based tools virtually daily for several years. Though I don’t consider myself an expert in the space and remain skeptical about many of the bigger promises—and apprehensive about the potential for harm—I do find them useful. I may share more about my specific workflow some other time, but in this post I wanted to offer some random observations and “feels,” both positive and negative, as they appear to me through the lens of my decades-long experience building and using various software.

The potential is exciting

At its best, programming with an AI assistant is what my 10-year-old self dreamed computer programming would be like—before being told by a stern informatics teacher to put my dreams of building a killer RPG on hold and learn five hundred memory-efficient ways to rearrange elements in a two-dimensional array. As time went on, I often wondered if, somewhere along the way, the acquired taste for puzzle-solving had overtaken the original desire to build things.

It’s dehumanizing

I’ve heard people rehash this sentiment: “It’s dehumanizing to be told to ask AI for help, to review AI’s solution, to read a letter generated by an AI, etc.” I think I’m personally okay with this—as long as I’m the one writing the prompt. However, I never want to be the first person reviewing AI’s output to someone else’s prompt. At work, I’ve already had the unpleasant experience of code-reviewing AI-generated slop in good faith, only to see the “author” feed my feedback back to their code assistant and produce another iteration of half-baked, subtly wrong slop. I’m also fully in support of any regulation that makes it illegal for AIs to pose as humans.

It seems familiar

I remember when web programming was looked down upon. Back in college, a mean senior made fun of me for investing time into learning the web stack. Having wrapped up his internship at a large consultancy specializing in native enterprise apps for Windows, he confidently proclaimed that “Web is not real programming.” At the time, it was obvious to me that all these types of apps would be on the web within ten years—which turned out to be true. The mean senior is now a friend and runs a consultancy specializing in building web apps for medical facilities. I’d surmise there are already more people using AI tools daily than there are web programmers.

Some problems should not be solved

When a human engineer picks up a task, it’s not uncommon for them to not produce any output for days, weeks… or ever. They may eventually return the task to the backlog. Those are not necessarily bad engineers. Sometimes there’s a mismatch between the task at hand and the person’s strengths. Other times, the whole premise of the task is flawed, and the best solution is to replace it with a different task—or keep procrastinating on it until (maybe) it goes away. There’s something unnatural about AI tools always giving their best effort to every problem they’re faced with.

The hard way is not always the right way

I remember when jQuery was just gaining popularity around 2008. I was already a proficient JavaScript programmer, having embraced AJAX and rich client-side interactions. Somehow, I convinced myself that using this hot new library was “cheating,” “bloat,” and “not for real programmers.” Then a new engineer joined my team and rewrote a fragile component using jQuery—fixing multiple browser compatibility issues while removing half the existing code. It would be years before native browser APIs got anywhere near the ergonomics offered by jQuery (if they ever did). If nobody can tell the difference, there’s no special prize for doing things the hard way.

Can software afford to get any worse?

Earlier this year, one of my Mastodon posts went viral (by my standards anyway):

...


I was referring to these two different articles published in The Atlantic a while ago, pre-ChatGPT—back when there was a growing sentiment that software engineers ought to step up their game and take real responsibility for the effects low-quality software has on people’s lives. Software was a mess to begin with, but now organizations are supercharged to churn out code beyond their engineers’ comprehension—something previously only possible by copy/pasting and introducing dependencies. This cannot be good.

Vibe coding is viable

More than a year ago, I “vibe coded” (which wasn’t even a term back then) a macOS app that presents itself as a MIDI-connected “recording light,” but in fact controls my Mac’s webcam so that when I press “Record” in Logic Pro X, a video of me playing the recorded region is saved in a separate folder. For a while, I used it to record guitar playthroughs for my musicians club. It’s the kind of project that would’ve required several nights of researching macOS multimedia APIs and a good refresher on Swift. In other words: it’s the kind of project I’d never complete otherwise. If I end up losing my job to AI, I hope that at least it means programming is solved and everyone can make their computer do anything.
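
If you’re curious what the moving parts roughly look like, here’s a minimal sketch of the approach (not the app’s actual code): it publishes a virtual CoreMIDI destination and starts or stops an AVFoundation webcam recording when a message arrives. The note-on trigger, the RecordingLight name, and the Playthroughs folder are placeholders of mine; the exact control-surface message Logic sends to a recording light is an assumption, and permission prompts and error handling are skipped.

```swift
import AVFoundation
import CoreMIDI

// Sketch only: a virtual MIDI destination that toggles a webcam recording.
// Assumes the DAW signals “record” with a plain note-on/off message; the real
// message Logic sends to a control surface may differ.
final class RecordingLight: NSObject, AVCaptureFileOutputRecordingDelegate {
    private let session = AVCaptureSession()
    private let movieOutput = AVCaptureMovieFileOutput()
    private var midiClient = MIDIClientRef()
    private var midiDestination = MIDIEndpointRef()

    func start() throws {
        // Webcam capture pipeline (requires the camera-usage permission).
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(movieOutput) { session.addOutput(movieOutput) }
        session.startRunning()

        // Virtual destination the DAW can target as if it were a recording light.
        MIDIClientCreateWithBlock("RecordingLight" as CFString, &midiClient, nil)
        MIDIDestinationCreateWithBlock(midiClient, "Recording Light" as CFString, &midiDestination) { [weak self] packetList, _ in
            self?.handle(packetList)
        }
    }

    private func handle(_ packetList: UnsafePointer<MIDIPacketList>) {
        // For simplicity, only look at the first packet in the list.
        let packet = packetList.pointee.packet
        let bytes = withUnsafeBytes(of: packet.data) { Array($0.prefix(Int(packet.length))) }
        guard bytes.count >= 3, bytes[0] & 0xF0 == 0x90 else { return } // note-on (placeholder trigger)
        if bytes[2] > 0 {
            startClip()                 // light on: start capturing
        } else {
            movieOutput.stopRecording() // light off: finish the clip
        }
    }

    private func startClip() {
        // Hypothetical output folder; each take gets a timestamped file name.
        let folder = FileManager.default.homeDirectoryForCurrentUser
            .appendingPathComponent("Playthroughs")
        try? FileManager.default.createDirectory(at: folder, withIntermediateDirectories: true)
        let url = folder.appendingPathComponent("take-\(Int(Date().timeIntervalSince1970)).mov")
        movieOutput.startRecording(to: url, recordingDelegate: self)
    }

    func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection], error: Error?) {
        // Clip saved (or failed); a real app would surface the error somewhere.
    }
}
```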

Asserting control

I’ve long accepted that professional programming is not some lofty mathematical discipline but a thing you do by poking around. However, I’m not ready to give up the hope of developing at least a high-level understanding of the systems I work with. I’ll continue to push for a cohesive theory of the code my teams write (or generate) at work. Nor will I stop maintaining my own private “garden” of lightweight tools for personal productivity—such as Anykey, my text-based bookmark manager, clipboard helper, and a custom calendar workflow. There’s still something to be said for small programs that can be fully understood.