What do you mean, “software sucks”?
A computer. The one device to rule them all, the ultimate tool for thought, the precursor of the singularity. Also, an inevitable source of disappointment for anyone who believes it is any of those things. Welcome to software! Biting her lip, wincing, her leg shaking frantically under the desk, a typical software user declares: “oh my, that’s enough for one day, I’d better get back to it tomorrow”. She lovingly shuts her laptop’s lid and grabs her phone to post a cheerful note for her online friends about how much she got done in spite of this unfortunate incident, kindly asking for advice. On a worse day, though, she might suddenly burst out: “well, f#!% you too, you stupid machine!”, her helpless gaze fixed on the indifferently glowing display. Breathe in. Breathe out.
Every computer user has been through this. The affliction seems to touch some more than others, but it spares neither casual users nor superstar programmers. Their individual experiences may vary, yet nobody is completely immune. Curiously, it’s the people responsible for the state of software who seem to be the most displeased with it. As Peter Welch writes in his cult essay “Programming Sucks”:
The only reason coders’ computers work better than non-coders’ computers is coders know computers are schizophrenic little children with auto-immune diseases and we don’t beat them when they’re bad.
Having caught myself ranting about the state of software one too many times, I wanted to stop and reflect on several important reasons why it appears so broken, and why anyone who thinks they know how to fix it is probably fooling themselves. And why we should keep trying anyway. Maybe.
There is no shortage of accounts of software dread. Moreover, some tech figures are so dedicated to documenting every possible way in which software sucks that their entire public personas are built on this fertile ground. Ranting can be fun, but for the purposes of this article I’ll try to boil down the reasons we might feel this way about our own trade. Let’s start with the top three: bugs, complexity, and competence.
The most commonly heard complaint about software is “it’s way too buggy.” For example, Dan Luu, a software engineer well known for his in-depth write-ups on programming and hardware, once set himself the task of documenting every single bug he encountered during one week of work. The length of the resulting list won’t surprise anyone who has ever tried to pay attention to the programs they interact with, but the fact that an acclaimed expert is as conscious of these imperfections as anyone else remains notable. Dan concludes that it’s important to spend more time testing our code and to adopt more advanced automated testing techniques.
The second kind of accusation is what we might call “the Babylonian threat”. We’ve piled up too many rough pieces, and the resulting structure is about to tip over. The malcontents usually proclaim that we must deconstruct this dilapidated edifice or face imminent collapse. From the more marginal efforts like suckless.org (reducing complexity by rewriting everything in fewer lines of C) to things like WebAssembly or Clojure’s Rich Hickey preaching the “decomplect” mantra, quite a few people are trying to cut down the abstraction count these days. Time will tell how much we can fix, but it’s worth noting that the Babylonian threat also has deep security implications, best exemplified by the recent Meltdown and Spectre vulnerabilities.
The third one is the programmers’ favorite: “people are idiots” or, more charitably, “people are ignorant.” It seems to be shared by many prominent figures in tech, from Linux’s own Linus Torvalds to indie game developer Jonathan Blow, who insists that Adobe and Twitter can’t write software. The belief here is that “software sucks because most developers don’t understand how to build good programs”. The proposed fix seems to be publicly shaming people into doing a better job (Linus) or bestowing almost perfect, video game-shaped computational artifacts onto the unfaithful (Jonathan).
Jokes aside, anyone can benefit from more knowledge, but there’s also always more to learn. The same accusation is often leveled at software users as well, to point out that laymen don’t appreciate the beauty of Emacs, UNIX, <insert a sophisticated technology with a steep learning curve of your choice>, etc. Software designers also occasionally get blamed for coming up with “dumb GUIs” and generally ignoring the preferences of advanced computer users.
While these are all valid concerns, I feel that my deepest moments of frustration with the machine stem from a different place. Let’s take a detour and go back to the early vision of the computer as a “thought amplifier”. Think Doug Engelbart or Joseph Licklider. As everybody knows, amplifying noise only produces more noise, so cranking up the hectic human consciousness is clearly not a great idea. This leaves us with the necessity of adopting a “thought framework” that a user can fit in her head and employ to benefit from the computer’s powers. Due to the limited capacity of the human brain, we prefer to reason about the world using as few categories as possible. Thus, the “Everything is a …” philosophy was born.
UNIX told us that everything (your notes, your network connection, even what you see on the screen) is a file, and therefore can be read from or written to. Later, the emerging graphical user interfaces postulated that everything is a window, and you interact with windows by pointing, clicking, selecting, and dragging and dropping objects on the screen. Finally, the Internet brought the notion of web pages: everything is a web page, and you get from one web page to another by following a hyperlink.
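To make the “everything is a file” part tangible, here is a minimal sketch, assuming a Linux system where the kernel exposes its own state under /proc; no special API is involved, just ordinary file reads:

```python
# A tiny illustration of "everything is a file" (assumes Linux and its /proc filesystem).
# Kernel and process state are exposed as plain files you can open and read.

with open("/proc/version") as f:        # the running kernel, readable like any text file
    print(f.read().strip())

with open("/proc/self/status") as f:    # this very process is also "a file"
    print(f.read().splitlines()[0])     # first line, e.g. "Name:  python3"
```

The same open/read/write verbs cover devices, pipes, and (through their descriptors) network sockets, which is exactly the uniformity the slogan promises.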
The UNIX vision was for each program to do one thing and do it well; the OS provided a plethora of simple tools which the user could compose to perform more complicated operations. On the one hand, the GUI significantly limited the user’s ability to combine programs: you couldn’t just use a clone brush from Photoshop while editing an image in an MS Word document, or sort files in a folder by their MD5 hash on a whim. On the other hand, the GUI facilitated natural discovery and learning without having to read complex manuals. It also introduced more users to applications: opinionated sets of tools for working on a particular task.
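For a concrete taste of what composition buys you, the MD5 example above is roughly `md5sum * | sort` on a GNU system, and only a few lines in a general-purpose language. A minimal Python sketch, with the folder path left as a placeholder:

```python
# A minimal sketch of "sort the files in a folder by their MD5 hash",
# the kind of ad-hoc request a fixed GUI rarely accommodates.
# The folder path is a placeholder; point it anywhere you like.

import hashlib
from pathlib import Path

def md5_of(path: Path) -> str:
    """Hex MD5 digest of a file's contents."""
    return hashlib.md5(path.read_bytes()).hexdigest()

folder = Path(".")
pairs = [(md5_of(p), p.name) for p in folder.iterdir() if p.is_file()]

# Print "hash  name", ordered by hash: roughly what `md5sum * | sort`
# would produce in a UNIX shell.
for digest, name in sorted(pairs):
    print(digest, name)
```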
Perhaps surprisingly, the UNIX descendants managed to drop or distort many tenets of the original UNIX philosophy: open the manual for one of the most common terminal utilities, “ls” (used to list directory contents), and you’ll find that its option flags use every single letter of the English alphabet with the exception of “j”, “y”, and “z”, some in both uppercase and lowercase variants. These days “ls” can sort files, colorize its output, and do things you’d never guess it could do. As it turned out, composing simple tools operating on plain text and tabular data wasn’t as easy as it sounded.
The web was initially envisioned as a global digital library, but it soon developed into the world’s largest computing platform and introduced us to web apps. Despite their many benefits, web apps stand out as particularly confusing. Under the guise of a web page, they share certain properties of desktop apps: sometimes you are allowed to drag and drop things on the page, sometimes a right click reveals an unexpected context menu, and on a particularly rare occasion, dragging a file into the browser window won’t cause the browser to simply open that file. A modern web app is neither an app nor a document, and it’s still unclear whether these flaws can be fixed without giving up on the document nature of the web.
To summarize, we ended up with a number of distinct layers, each offering some level of internal consistency that quickly degrades at the cross-layer boundaries. Note that I’m only using the terminal, the GUI, and the web as familiar examples. If you’re a programmer, every programming language, every framework, and every IDE is also its own disconnected bubble. You can be an expert in debugging C programs with gdb and still feel completely disarmed when faced with a stuck JVM process, or even with your own C program running on Windows. Contemporary software “sucks” because it severely limits our ability to reuse knowledge across contexts.
Whether we’re talking about the Linux shell, Lisp programming, or Emacs, once you become productive within an environment, you want to spend more and more time in it and minimize your contact with the idiosyncrasies of other platforms. While the “pick the right tool for the job” motto remains popular in the community, it co-exists with efforts to let Ruby programmers write GUI programs for macOS, ship native code to the browser, and make everything run JavaScript. And it’s hard to oppose this trend, as it enables more people to tackle various problems without leaving their comfort zone.
It also means that, given enough time, every software environment is doomed to grow all the standard parts expected from its category. For a programming language it will be a build tool, a debugger, a web framework, a GUI toolkit, a compiler to JavaScript — in an arbitrary order. Writing this, I picture a gray-haired old-timer who winces reading another announcement of a modern layout system for CSS, or a “revolutionary” file browser for iOS, or a rudimentary 3D engine running in a web browser. Everything easy is hard again. We might be doomed to re-inventing the computer inside a computer inside a computer and having to learn to use every new computer over and over again. Some say it gets easier after a while.
I remember being disturbed by Gerry Sussman’s reply when he was asked why MIT stopped teaching the renowned 6.001 course based on the “Structure and Interpretation of Computer Programs” textbook. In the article titled “Programming by poking: Why MIT stopped teaching SICP,” Yarden Katz writes:
Sussman pointed out that engineers now routinely write code for complicated hardware that they don’t fully understand (and often can’t understand because of trade secrecy). The same is true at the software level, since programming environments consist of gigantic libraries with enormous functionality. According to Sussman, his students spend most of their time reading manuals for these libraries to figure out how to stitch them together to get a job done. He said that programming today is “More like science. You grab this piece of library and you poke at it. You write programs that poke it and see what it does. And you say, ‘Can I tweak it to do the thing I want?’”. The “analysis-by-synthesis” view of SICP — where you build a larger system out of smaller, simple parts — became irrelevant. Nowadays, we do programming by poking.
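To caricature the workflow Sussman describes, a “poking” session might look something like the sketch below; the library (Python’s built-in json module) and its calls are real, but the three-step ritual is my own illustration:

```python
# "Programming by poking", in miniature: import the thing,
# see what it exposes, prod it with small inputs, tweak until
# the output looks like what you need.

import json

# Poke 1: what does this library expose at all?
print([name for name in dir(json) if not name.startswith("_")])

# Poke 2: feed it a toy input and observe what comes out.
print(json.dumps({"poke": 1}))

# Poke 3: tweak the knobs until the output matches what you want.
print(json.dumps({"poke": 1}, indent=2, sort_keys=True))
```

No understanding of the internals is required, or sometimes even possible; you learn the observable behavior and move on.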
The idea of programming shifting from expressing your understanding of a problem in code to the gruesome task of making sense of incomprehensible digital phenomena concerned me deeply, and it still does. Nevertheless, I remain hopeful. There is a limit to how much complexity a human brain can handle. The same market forces that allegedly push the industry to omit testing, stack “leaky” abstractions, and ignore accessibility, performance, or even the user’s freedom of expression will have to give in once the programmer’s productivity is at stake, or once hardware manufacturers are no longer able to deliver improvements at the same impressive rate. Some even boldly suggest that AI will take over and human programming will be relegated to an odd hobby (I’d probably still program, though!). In either case, better not to take our current habits too seriously. We might be just a bunch of kids playing in a sandbox, waiting for the computer revolution to happen.