The Tragedy of the Common Lisp:

Why Large Languages Explode

Mark S. Miller
May 27, 2019 · 5 min read

Adapted from a 2015 es-discuss thread. “Common Lisp” is not the topic. It serves only as one of many illustrative counter-examples.

I have been on the JavaScript standards committee (TC39) since 2007. On TC39, we appreciate the value of language simplicity. But over time, we have lost our vigilance against encroaching complexity. We must better understand how that happens naturally, what the costs are if left unchecked, and what to do about it. This essay is addressed not just to TC39, but to all those who wish to influence the trajectory of the JavaScript standard or any standard facing similar pressures. Learn from our mistakes!

The Algol, Smalltalk, Pascal, and early Scheme languages were prized for being small and beautiful. The early C and JavaScript languages were justifiably criticized for many things, and rarely mistaken for beautiful. But they were small, and this aspect was properly and widely appreciated. When a language is small, our appreciation of it is often driven by the sense “I can learn the whole thing, and then I will have a mastery of it”, and later “I know the whole thing. I love the fact that there are no corners I don’t know.” For C and JavaScript, few who thought they knew the whole thing actually did — the details were actually fiendishly complex. Nevertheless, this sense drove much of the satisfaction with everyday usage.

The esthetic of smallness of JavaScript lasted through the EcmaScript-5 standard. I participated heavily in both EcmaScript-5 and EcmaScript-2015 and in both cases I am proud of my contributions. EcmaScript-2015 is much larger, but is nevertheless a better language. Given where we started, we could not have achieved these gains in the utility of JavaScript without such an increase in size. I do not regret most of the additions that grew EcmaScript-5 to EcmaScript-2015. For many of these, had we the EcmaScript-2015 standards process to do over again, I would likely make similar additions.

Each of the additions that grew EcmaScript-5 to EcmaScript-2015 had to pass a very high bar. Psychologically, this made sense to all of us because we were starting from a language, EcmaScript-5, whose smallness we could still appreciate. When a language is small, every additional feature is viscerally felt as a significant percentage increase in the size of the language. The specific benefits of a feature are always visible to its advocates. For a small language, a new feature’s general costs in added complexity are also still visible to everyone.

[Image caption: The future of JavaScript?]

Once a language gets beyond a certain complexity — say LaTeX, Common Lisp, C++, PL/1, modern Java — the experience of programming in it is more like carving out a subset of features for one’s personal use out of what seems like an infinite sea of features, most of which we become resigned to never learning. Once a language feels infinite, the specific benefits of a new feature are still apparent. But the general costs in added complexity are no longer apparent. They are no longer felt by those discussing the new feature. Infinity + 1 === Infinity. Even aLargeNumber + 1 === approximatelyAsLargeANumber. This is the death of a thousand cuts that causes these monstrosities to grow without bound.
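The arithmetic in that analogy is not merely rhetorical; it holds literally in JavaScript's own IEEE-754 numbers:

```javascript
// Once a quantity is effectively infinite, adding one changes nothing:
console.log(Infinity + 1 === Infinity); // true

// The same holds for merely very large numbers: past 2**53, a double's
// precision is coarser than 1, so adding 1 is absorbed entirely:
console.log(1e308 + 1 === 1e308); // true
```

Just as the addition goes unnoticed by the number, the added feature goes unnoticed by those who already perceive the language as unboundedly large.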

So please, I beg everyone influencing the language, when considering a new feature, please apply a higher bar than “Wouldn’t it be nice if we could also write it this way?”. I believe that EcmaScript-2015 is in that middle territory where unrestrained growth is not yet inevitable, but only if we all restrain each other with high standards for any proposed new feature. As a community, we need more of a shared sense of panic about the size that EcmaScript-2015 has already grown to. Ideally, our panic should increase, not decrease, with further growth from here as our size approaches the point of no return.

Some distinctions

Non-uniform pressure to stay small

The urgency of minimalism gets weaker as we move from core language to standardizing libraries. The overall standard language can be seen as consisting of these major parts:

  • Fundamental syntax — the special forms that cannot faithfully be explained by local expansion to other syntax.
  • Semantic state — the state that computation manipulates.
  • Kernel builtins — the portion of the built-in library providing functionality that, if it were absent, could not be provided instead by user code.
  • Non-kernel intrinsics — built-in libraries that could be implemented in JavaScript, but that semantic state or kernel builtins depend on. For example, with proxies, one might be able to implement Array in user code. But other kernel builtins already have a dependency on Array specifically, giving it a privileged position over any replacement.
  • Syntactic sugar — the syntax that can be explained by local expansion to fundamental syntax.
  • Global convenience libraries — could be implemented by unprivileged user code, but given standard global naming paths in the primordial global namespace.
  • Standard convenience modules — blessing community-developed modules as standard.
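To make the "syntactic sugar" category concrete, here is one illustrative sketch: ES2016's exponentiation operator can, for ordinary numbers, be explained by local expansion to an existing builtin call. (The expansion is not exact in every corner — for instance, `**` accepts BigInt operands while `Math.pow` does not — which is typical of how real sugar resists a perfectly faithful rewrite.)

```javascript
// Sugar: the ES2016 exponentiation operator…
const sugared = 2 ** 10;

// …explained, for Number operands, by local expansion to a builtin call:
const expanded = Math.pow(2, 10);

console.log(sugared === expanded); // true: both are 1024
```

Because such a feature is explainable by local rewriting, its cost falls near the cheap end of the list above — unlike fundamental syntax or semantic state, it adds no new explanatory machinery to the language's core.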

I have listed these in order, according to my sense of the costs of growth and the urgency for minimalism. For all of these we still need to exercise discipline. Only for the last should we consider growth of absolute size to be unbounded, restricting only the rate of growth as we wait for candidates to prove themselves first by the de facto process of community adoption. Ideally, TC39 should stop being the bottleneck on that last bullet anyway, as external de facto and de jure processes should be perfectly capable of independently arguing about and evolving standard convenience modules.
