Our industry has been infected by a dangerous meme, and it’s one that hasn’t been given its proper scrutiny. Like many memes that explode in popularity, “Worse is Better” gave a name to an underlying fragment of culture or philosophy that had been incubating for some time. I point to C++ as one of the first instances of what would later become “Worse is Better” culture. There had been plenty of programming languages with hacks and warts before C++, but C++ was the first popular language deliberately crippled for pragmatic reasons by a language designer who likely knew better. That is, Stroustrup had the skills and knowledge to create a better language, but he chose to accept as a design requirement retaining full compatibility with C, including all its warts.
There’s nothing inherently wrong with making tradeoffs like C++ did. And since C++ we’ve seen many instances of these sorts of tradeoffs in the software world. Scala is another recent example–a powerful functional language which makes compromises to retain easy interoperability with Java. What I want to deconstruct is the culture that has arisen to rationalize these sorts of tradeoffs without the need for serious justification. That is, we do not merely calculate in earnest to what extent tradeoffs are necessary or desirable, keeping in mind our goals and values; instead, there is a culture around making such compromises that actively discourages people from even considering more radical, principled approaches. That culture campaigns under the banner “Worse is Better”.
The signs that “Worse is Better” would become a cultural phenomenon—rather than a properly justified philosophy—were everywhere. Stroustrup famously quipped that “there are only two kinds of languages: the ones people complain about and the ones nobody uses”, and his Wikiquote page is full of similar sentiments. Unpacking this a bit, I detect an admission that, yes, C++ is filled with hacks and compromises that everyone finds distasteful (“an octopus made by nailing legs onto a dog”), but that’s just the way things are, and the adults have accepted this and moved on to the business of writing software. Put more bluntly: STFU and get to work.
Thus when Richard P. Gabriel published his original essay in 1989 from which “The Rise of Worse is Better” was later extracted and circulated, he was merely giving a name and a slogan to a fragment of culture that was already well on its way to taking over the industry. To give some context, in 1989, Lisp was quite far along in the process of losing out to languages like C++ that were in many technical respects inferior. The story of Lisp’s losing out in adoption to other languages is complex, but the rise of “Worse is Better” as a cultural phenomenon is a piece of the puzzle.
Nowadays, the “Worse is Better” meme gets brought up in just about every tech discussion in which criticisms are leveled against any technology in widespread use, or a better way is suggested (among other places, on this blog see CSS is unnecessary). Unpacking “Worse is Better”, I find the following unstated and unjustified assumption:
As a rule, we should confine ourselves to incremental, evolutionary change. Revolutionary change, and going back to the drawing board, is impractical.
… though part of what makes “Worse is Better” such an effective meme is that it can always be defensively weakened to near-tautological statements (like: “one should consider whether radical changes are justified”). Well, of course. But when “Worse is Better” is brought up, it is typically as an excuse to avoid doing this calculus in earnest about what investments might or might not pay for themselves, or to dismiss out of hand anyone suggesting such a thing. In my post, the mere hint that we might benefit from scrapping CSS, sidestepping it, or starting over in that domain is enough to bring charges of being an “idealist”, “impractical”, and the like, which are considered forms of heresy by the culture. Notice that there is no rational, technical argument being made–that would be an interesting and worthwhile conversation! Like other powerful memes, the underlying assumptions go unsaid, and attempts to bring them to light in discussions can always be deflected by backpedaling to some truism or tautology. The slogan thus remains untarnished and can continue to be propagated in future conversations.
Developing software is a form of investment management. When a company or an individual develops a new feature, inserts a hack, hires too quickly without sufficient onboarding or training, or works on better infrastructure for software development (including new languages, tools, and the like), these are investments, or the taking on of debt. Investments in software development carry risk (we don’t know for certain whether some tech we create or try to create will pan out) as well as potential returns. Modern portfolio theory states there is no single optimal portfolio, but rather an efficient frontier of optimal portfolios, one for each level of investor risk tolerance. In the same way, there is no one optimal portfolio of software investment. Questions about whether an investment in new technology is justified therefore cannot be settled at the level of ideology or sloganeering. Actual analysis is needed, but “Worse is Better” pushes us unthinkingly toward the software portfolio consisting entirely of government bonds.
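To make the analogy concrete, here is a toy sketch (with entirely made-up numbers, standing in for no real market or technology) of mixing a riskless “bond” with a risky asset. The point is just the shape of the tradeoff: every mix has a different expected return and risk, and which point on the frontier is “best” depends on your risk tolerance, not on any universal rule.

```python
def portfolio(safe_fraction, safe_return=0.02, risky_return=0.10, risky_stdev=0.30):
    """Expected return and risk (standard deviation) of a two-asset mix.

    Illustrative numbers only: a riskless asset returning 2%, and a risky
    asset with 10% expected return and 30% standard deviation.
    """
    risky_fraction = 1.0 - safe_fraction
    expected = safe_fraction * safe_return + risky_fraction * risky_return
    stdev = risky_fraction * risky_stdev  # safe asset contributes no variance
    return expected, stdev

# The all-"government bonds" portfolio: expected return 2%, zero risk.
all_safe = portfolio(1.0)

# Shift just 10% into the risky asset: expected return rises to about 2.8%,
# at the cost of a modest 3% standard deviation. Neither mix is "optimal"
# in the abstract; each suits a different risk tolerance.
mostly_safe = portfolio(0.9)
```

The parallel to software: a 0% allocation to risky foundational work is itself a choice on the frontier, and usually an extreme one, not a neutral default.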
This investment management view also makes it clear that we may choose to invest in things not solely for future return, but because the investment itself has value to us. That is, “Worse is Better” thinking encourages the view that software is always a means to an end, rather than something with attributes we value for their own sake. It doesn’t matter if something is ugly or a hack; the ends justify the means. Unpack this kind of thinking and see how ugly it really is. Do we really as an industry believe that how we write software and what software exists should be fully determined by optimizing the Bottom Line? Other professions, like medicine, law, and engineering, have values and a professional ethic, where certain things are valued for their own sake. “Worse is Better” pushes us to accept the idea that software is nothing more than a means to an end, and that whatever hacks are needed to “get the job done” (whatever that means exactly) are justified.
But we don’t need to resort to philosophy to justify why we should make greater investment in the software tech we all use daily. This “Worse is Better” culture is a large part of what’s brought us to the current state, where programmers are awash in a sea of accidental complexity caused by the piling on of hack after hack, always to solve short term needs:
OH: "Basically, no one seems to grasp that when stuff that's fundamental is broken, what you get is a combinatorial explosion of bullshit."— Paul Chiusano (@pchiusano) November 15, 2013
Eventually a software project becomes a small amount of useful logic hidden among code that copies data between incompatible JSON libraries— Nat Pryce (@natpryce) August 9, 2014
The first quote is due to Paul Snively. He made it as a rather offhand remark—I love it because it perfectly captures our industry’s predicament (a swamp of accumulated technical debt and accidental complexity) and is suggestive of the inevitable frustration we all feel in having to deal with the direct, day-to-day consequences of these poor inherited decisions.
What I take from these quips is that even without getting philosophical, an earnest appraisal of our actual investment horizons and level of risk tolerance is often enough to justify some level of investment in “risky” new technology that can, if successful, improve in various ways on the status quo. The portfolio view is again helpful–even if we generally want to play it safe by investing in the software tech equivalent of government bonds, that does not justify allocating 0% of our portfolio to riskier investments in new foundational tech. The outcome of everyone solving their own narrow short-term problems and never really revisiting the solutions is the sea of accidental complexity we now operate in, and which we all recognize is a problem.
“Worse is Better”, in other words, asks us to accept a false dichotomy: either we write software that is ugly and full of hacks, or we are childish idealists who try to create software artifacts of beauty and elegance. While there are certainly cases where values may conflict, very often there is no conflict at all. For instance, functional programming is a beautiful, principled approach to programming which is simultaneously extremely practical and can be justified entirely on that basis!
This “Worse is Better” notion that only incremental change is possible, desirable, or even on the table for discussion is not only impractical, it makes no sense. Here’s Richard Dawkins from The Selfish Gene talking about the importance of starting over:
The complicated organs of an advanced animal like a human or a woodlouse have evolved by gradual degrees from the simpler organs of ancestors. But the ancestral organs did not literally change themselves into the descendant organs, like swords being beaten into ploughshares. Not only did they not. The point I want to make is that in most cases they could not. There is only a limited amount of change that can be achieved by direct transformation in the ‘swords to ploughshares’ manner. Really radical change can be achieved only by going ‘back to the drawing board’, throwing away the previous design and starting afresh. When engineers go back to the drawing board and create a new design, they do not necessarily throw away the ideas from the old design. But they don’t literally try to deform the old physical object into the new one. The old object is too weighed down with the clutter of history. Maybe you can beat a sword into a ploughshare, but try ‘beating’ a propeller engine into a jet engine! You can’t do it. You have to discard the propeller engine and go back to the drawing board.
For similar reasons, catastrophic extinction events are believed to have been important for biological evolution, by breaking ecosystems out of equilibrium and opening new niches for further innovation. So it’s ironic that in the tech industry, despite all the talk of “disruption”, this notion of creative destruction is largely absent, and we’ve consigned ourselves to the grooves of local optima established thirty years ago by various tech choices made when nobody knew any better.
Not many people are aware that Richard Gabriel later distanced himself from his own remarks on “Worse is Better” in a follow-up essay, Worse is Better is Worse, written under the pseudonym Nickieben Bourbaki. The essay is written from the perspective of a fictional friend of Gabriel’s:
In the Spring of 1989, he and I and a few of his friends were chatting about why Lisp technology had not caught on in the mainstream and why C and Unix did. Richard, being who he is, remarked that the reason was that “worse is better.” Clearly from the way he said it, it was a slogan he had just made up. Again, being who he is, he went on to try to justify it. I’ve always told him that his penchant for trying to argue any side of a point would get him in trouble.
A few months later, Europal–the European Conference on the Practical Applications of Lisp–asked him to be the keynote speaker for their March 1990 conference, and he accepted. Since he didn’t have anything better to write about, he tried to see if he could turn his fanciful argument about worse-is-better into a legitimate paper.
I like to imagine that Gabriel, himself a Lisp programmer, was horrified by the cultural monster he’d helped create and the power he gave it by assigning it a name. Realizing what he’d done, the later essay made a vain attempt at putting the genie back in the bottle. But by then it was much too late. “Worse is Better” had become a cultural phenomenon. And over the next twenty-five years, we saw the growth of the web, a prevailing culture of “Worse is Better”, and a tendency to solve problems with the most myopic of time horizons.
Discuss it on Twitter: #worseisworse