Clojure vs. The Static Typing World
Summary: Rich Hickey explained the design choices behind Clojure and made many statements about static typing along the way. I share an interesting perspective and some stories from my time as a Haskell programmer. I conclude with a design challenge for the statically typed world.
Rich Hickey's Keynote at Clojure/conj 2017 has stirred up the embers of some old flame wars, particularly the static vs. dynamic typing debate. I happen to have an interesting perspective, having worked professionally in both Clojure and Haskell. Some people on each side of the debate seem to be confused about what he was getting at, so I thought I'd share my interpretation of what he meant.^1 I want to shed some light on the spirit of his argument, using some of my experiences.
Clojure was designed to make a certain kind of software easier to write. That kind of software can be characterized as:
- solving a real-world problem => must use non-elegant models
- running all the time => must deal with state and time
- interacting with the world => must have effects and be affected
- everything is changing => must change in ways you can't predict
Everything Rich talks about in his presentation is within this design context. I think that is lost in some of the discussions I've seen online, so I'm just highlighting it here.
He's never really talking in the context of dynamic typing vs. static typing. It's always Clojure vs. the type systems available to him (though he never explicitly said that, it's easily understood from his language). This is just a guess, but at the time he wrote Clojure, those type systems were most likely Java-style (Java, C++, C#) and Haskell. It's not an attack on the theory of type systems or what they are capable of in the general sense. It is a pragmatic argument about whether any existing system fits his design requirements. Although his language does not address it precisely, the spirit of his comments is an answer to the question "Why invent Clojure when you could just use Haskell or Java?"
But more on this later.
JSON and ADTs
When I was working on Haskell-based web software, we dealt with a lot of JSON, as one does. Haskell had a really neat way of representing JSON as an Algebraic Data Type (ADT). It's nice because JSON is well-specified as a recursive type.
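From memory, it looked something like this (the real type was aeson's Value; constructor names and representations differ slightly, so treat this as a sketch):

```haskell
-- A simplified, self-contained JSON type. One constructor per kind of
-- JSON value; JArray and JObject make the type recursive.
data JSON
  = JNull
  | JBool Bool
  | JNumber Double
  | JString String
  | JArray [JSON]
  | JObject [(String, JSON)]  -- a real library uses a HashMap here
  deriving (Show, Eq)
```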
Any JSON value could be represented using that ADT. It was totally well-typed. Except that, even though you knew the type, you still knew nothing about the structure of that JSON. The type system couldn't help you there.
What we wound up doing was nesting pattern matching to get at the value we wanted. The deeper the JSON nesting, the deeper our pattern matching. We wrote functions to wrap this up and make it easier. But in general, it was a slog. The tools Haskell gives you are great at some things, but dealing with arbitrarily nested ADTs is not one of them.
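Such a lookup might look like this (a sketch with a trimmed-down JSON type and hypothetical field names, not code from that system):

```haskell
-- Trimmed-down JSON type: just the cases this example needs.
data JSON
  = JNull
  | JString String
  | JObject [(String, JSON)]

-- Hypothetical helper: dig response.user.email out of a decoded value.
-- Every level of nesting needs its own pattern match, and any miss
-- falls through to Nothing.
userEmail :: JSON -> Maybe String
userEmail (JObject top) =
  case lookup "user" top of
    Just (JObject user) ->
      case lookup "email" user of
        Just (JString e) -> Just e
        _                -> Nothing
    _ -> Nothing
userEmail _ = Nothing
```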
Some people say that this is the cost of having well-typed code elsewhere---dealing with the untyped stuff is terrible. But imagine if more and more of your code is dealing with JSON. It would be nice if there were some easier way.
Types, positional semantics, and coupling
If you have a procedure with 10 parameters, you probably missed some.
--- Alan Perlis
Rich had some good points about positional semantics. When defining an ADT, positional semantics make it hard to read the code. If you've got seven arguments to your ADT's constructor, you've probably got them in the wrong order and you've forgotten one.
So the answer is "don't use positional constructors". Haskell does have constructors with named arguments. And that's fine, except in a real system, you start with a simple ADT that has two arguments---very easy to keep track of. You use it all over the place, using pattern matching. Its structure is now coupled all over the code.
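For the record, here is what the two styles look like side by side (a generic sketch with hypothetical field names):

```haskell
-- Positional: at a call site, which Bool is which?
data UserP = UserP String String Bool Bool Int

-- Record syntax names each field, so construction reads unambiguously.
data User = User
  { userName   :: String
  , userEmail  :: String
  , userActive :: Bool
  , userAdmin  :: Bool
  , userLogins :: Int
  }

alice :: User
alice = User { userName = "Alice", userEmail = "alice@example.com"
             , userActive = True, userAdmin = False, userLogins = 0 }
```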
Then, you need to add one more piece of information. You add a third argument. Then you follow down all of the compiler errors, adding that third argument to your pattern matching statements. Great! Everything is good now.
Except, over time, you add a fourth and a fifth. Pretty soon, you've got seven or eight of them. Good thing you have a type system to keep track of all of that. And don't get me started about merge conflicts when two branches want to modify the ADT at the same time. That's a digression.
The fact is, in real systems, these ADTs do accrete new arguments. We had a central type called Document that had a bunch. I don't remember how many now. It has been a few years. Seven? Ten? It was a lot. And I remembered every single one of them because we processed documents, so most code was about them.
You may catch them early and give them names. But in some cases, you don't. So you file a ticket to change from positional to named, and then to rewrite pattern match expressions in almost every file. And when you prioritize that backlog, that task is probably not as important as adding a new feature. So it stays.
My recommendation to those of you writing Haskell style guides is to disallow positional constructors with more than three arguments. Put a linter on it.
Rich mentioned that Maybe wasn't a good solution to a lack of knowledge. He said "You either have it or you don't." This one is the most baffling to me. It's baffling because it seems very right to me, but I can't say why. I also don't have any stories handy to explain my experiences with Maybe. Unfortunately, he doesn't go very deeply into it.
UPDATE: He goes super deep into this in a more recent talk called Maybe Not. The stuff I mention here is just my opinion and really missed what Rich was trying to say.
As some people online have mentioned, Maybe String means precisely "I may (or may not) have a String". But that's missing the point. Let me try to explain why.

Maybe is a very neat idea. It's a type that represents having a value or not. It's often used as a replacement for nullable fields. In a world where types, classes, or fixed records are used to hold information about an entity, using a Maybe for the optional fields is a way to represent that optionality.
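For instance, a hypothetical Person record where the email field is optional:

```haskell
-- Hypothetical domain type: email is optional, so it is wrapped in Maybe.
data Person = Person
  { personName  :: String
  , personEmail :: Maybe String
  }

withEmail, noEmail :: Person
withEmail = Person { personName = "Ada",  personEmail = Just "ada@example.com" }
noEmail   = Person { personName = "Alan", personEmail = Nothing }
```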
In the kinds of systems Rich is talking about, any data could possibly be missing---if not now, then at some point in the future when requirements change. At the limit, you would need to make everything Maybe. And if that's the case, I would propose that maybe, for this context, you want to move that optionality into the information model, not the domain model. Your Person type is part of the domain model (it says: these are the bits of information we care about for people). The fact that any bit of information could be missing is part of the information model. The information model might contain other data, like when the information entered the system.
To put it another way, Maybe lets you represent two possible states: having the value and not having the value. That may be exactly what you need for your domain, in which case, use it. However, there are many domains that need three cases. For those, Maybe won't work.

Imagine a simple form you ask people to fill out to get their contact information. There are three cases: 1) I got their email address, 2) I asked for the email address but they didn't fill it out, and 3) I didn't ask for the email address. Maybe String collapses the last two into the same value (Nothing). But that loses some information. It's not perfect, but the hashmap with nil does have all three of those cases: a String value for having an email address, a nil value for "asked but unfilled", and a missing key for "didn't ask".
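To make the collapse concrete, here is a sketch (with hypothetical names) of the three cases as a dedicated type, and the conversion that shows exactly what Maybe String throws away:

```haskell
-- Maybe String collapses cases 2 and 3 into Nothing.
-- A dedicated type keeps all three distinct.
data EmailField
  = Provided String  -- 1) they gave us an address
  | Unfilled         -- 2) asked, but left blank
  | NotAsked         -- 3) never asked
  deriving (Show, Eq)

-- Collapsing back to Maybe loses the Unfilled/NotAsked distinction.
toMaybe :: EmailField -> Maybe String
toMaybe (Provided e) = Just e
toMaybe _            = Nothing
```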
I'm reminded of my friend's company. They do medical record software. The standards for medical records have hundreds of types. And each version that comes out changes things slightly. On top of that, you're getting data from systems that don't implement things properly. And on top of that, humans are entering the data incorrectly, which isn't surprising since the spec is hundreds of pages long.
In a system like that, you can't write correct types for every kind of entity. You'd never finish before the new spec came out. Instead, you need to think at an information level. One approach is to do your best with the information you need to work with and pass along the rest as-is. Regardless of how you decide to handle it, sprinkling Maybe around isn't going to cut it. Nor are fixed entity types. You really need to take it up a level to build a robust information model.
Types as concretions
Rich talked about types, such as Java classes and Haskell ADTs, as concretions, not abstractions. I very much agree with him on this point, so much so that I didn't know it wasn't common sense.
But, on further consideration, I guess I'm not that surprised. People often talk about a Person class representing a person. But it doesn't. It represents information about a person. A Person class, with certain fields of given types, is a concrete choice about what information you want to keep out of all of the possible choices of what information to track about a person. An abstraction would ignore the particulars and let you store any information about a person. And while you're at it, it might as well let you store information about anything. There's something deeper there, which is about having a higher-order notion of data.
This isn't a slight on types or classes. It's more a comment on people's use of language. Abstraction is a term used without enough thought.
A focus on composable information constructs
Sure, we get very few guarantees about the data we have in Clojure. We get that. But if we assume we've got some basic things right, like that we have a map when we think we have a map, we do get some nice guarantees. For instance, assoc-ing the same key and value is idempotent. And we can access the value for any key in constant time.
And Haskell has that for ADTs. But can Haskell merge two ADTs together as an associative operation, like we can with maps? Can Haskell select a subset of the keys? Can Haskell iterate through the key-value pairs?
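For ordinary key-value maps (as opposed to per-entity ADTs), Haskell does have those operations; a sketch with Data.Map from the containers library:

```haskell
import qualified Data.Map.Strict as M
import qualified Data.Set as S

m1, m2 :: M.Map String Int
m1 = M.fromList [("a", 1), ("b", 2)]
m2 = M.fromList [("b", 20), ("c", 3)]

-- Inserting the same key and value twice is idempotent.
idempotent :: Bool
idempotent = M.insert "a" 1 (M.insert "a" 1 m1) == M.insert "a" 1 m1

-- An associative merge (left-biased here; Clojure's merge prefers the
-- rightmost map, so the bias differs, but the shape is the same).
merged :: M.Map String Int
merged = M.union m1 m2

-- Selecting a subset of the keys, like Clojure's select-keys.
subset :: M.Map String Int
subset = M.restrictKeys merged (S.fromList ["a", "c"])

-- Iterating the key-value pairs.
pairs :: [(String, Int)]
pairs = M.toList merged
```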
Of course, you could build a type with all of those properties and more in Haskell. A universal information model, with a convenient library of functions for dealing with them. That universal information model would look something like the JSON datatype or a richer one like an EDN data type. And your library of functions for dealing with them would look something like Clojure's standard library. But just like with the JSON type, you'd have very few static guarantees. At some point you're just re-implementing Clojure.
Types are a design space
Okay, now for some pontification. A type system gives you a new design space. It lets you express things, like type constraints, that you simply can't in untyped languages. That gives you a lot more choices to make. Choices can be good. But they can also be a distraction.
Rich Hickey mentioned puzzles as being addictive, implying that it's fun to do stuff in the type system because it's like a puzzle. It's similar to the Object-Oriented practice of really puzzling out those isA relationships. It very much is like a puzzle: you've got some rules and an objective. Can you figure out a solution? Meanwhile, it gets you no closer to the goal of writing working software.
I've definitely experienced this myself in Haskell, both as the puzzler and as an onlooker. This may have changed in the years since I've done Haskell professionally, but I often found Haskell libraries to be puzzle solutions. Someone wanted to figure out how to type some protocol to "make it safe". They usually succeeded at getting a partial implementation before giving up. On many occasions, after looking through several attempted implementations of a simple protocol, no existing library fit the bill.
Like I said, this may have changed. But I'm confident in asserting that there's a danger with getting carried away with the complexity of your types. Haskell gives you plenty of rope to do that.
Type systems are a design space
The language's type system itself is a design space. In addition to the syntax and runtime semantics of the language, typed languages add on an additional space for exploration. This inherently makes them harder to design: they simply have more choices to make. As evidence that there are many difficult choices to make in the type system, look at guides such as An opinionated guide to Haskell in 2018 and Guide to GHC Extensions. Again, the type system's choices can give you great power. It's simply that there is more you have to design.
I would say that Clojure and Haskell are both well-designed languages---neck-and-neck, even. And now, thanks to the talk, we know what Clojure was designed for. What was Haskell designed for?
What Haskell was designed for
I don't think it's any secret that Haskell was designed as a lingua franca for functional programming research and type theory research. It famously "avoids success at all costs" by sticking to its principles of purity and typeability.
Of course, many people and companies are successful using Haskell. People have learned to wield the tremendous power of the type system. I think that points to something inherently valuable about Haskell and the other typed languages. Lots of parts of a system are inherently typeable---meaning they do have small, clear, and stable types. Those parts aren't as chaotic as the outside world. They can be well understood and codified into precise types to great advantage. And there's no reason the information model couldn't be either. Of course it's possible. But has it been done?
It's easy to hear critiques of static typing and interpret them as "static typing is bad". Rich Hickey was certainly not doing that. He had a particular problem to solve and was noting how static typing was not particularly well-suited to that problem. Maybe doesn't solve the problem. Person and other entity types don't solve the problem. They are useful for some things, but not for this problem.
I see the gist of his talk not as a condemnation of static typing, but instead as a design challenge to Haskellers and language designers. Many people online brought up Haskell extensions or row polymorphism, or some feature of Purescript, etc. These are all great. The challenge is to piece together an actual solution to the problem of "situated programs", not just point to features that might address one issue. Perhaps you can do it in pure Haskell. Maybe you need some extensions. Maybe it's an entirely new language. Or maybe you dismiss the challenge as unimportant---that his description of "situated programs" is not the right way to look at things.
But I don't see people doing any of those things. What people are doing is saying "but of course that's possible in principle, so the argument against static typing is invalid". It's not about what's possible, it's about the particular designs of the actual languages we have to choose from. He criticized particular features used for particular purposes. Maybe is not it. Pattern matching is not it. Entity types are not it. What is the static typing solution to the entire problem?
When I was working on the Haskell system, I really missed having a more flexible model of information. So much of our code was about taking form fields and making sense of them. That usually involved trying to fit them into a type so they could be processed by the system. Like Rich mentioned, the information-processing part dominated the codebase.
When I moved into a Clojure job, I felt such a sense of freedom. I could treat information as information and not have to prove anything to a type checker. However, I had the benefit of having traversed the gauntlet of a strict static type system. My mind had been seeded with the thought-pathways of type correctness. My Clojure code benefited, and I still look in horror at some Clojure code that plays too fast and loose with types.
I sometimes wish I could tighten down some Clojure code with static type guarantees. These are in small, remote portions of the code that have intricate algorithms. Static types would be very useful there. But that's the exception. In general, I like the flexible information model approach of Clojure better than the statically-typed approach of Haskell.
Talking about this stuff is a minefield. The grooves of the static-vs-dynamic debate have been etched too deeply in our minds. We can no longer hear simple statements and interpret them without the baggage of 50 years of entrenched fighting. That's too many mixed metaphors already, so I'll stop there. I'll just say this: I don't think either "side" is going to win any time soon. Both static and dynamic languages are going to play important roles in the future. We need to look past the distinction to find a better way to program.
Elm's design seems superbly crafted for its purpose of building rich, interactive web interfaces. It is small and focused. Is it possible with our current understanding to build a statically-typed language that rivals Clojure for Clojure's intended purpose? What would such a language look like? What types would it have? Who will build it?
1. I've watched the talk a couple of times to understand as much as I can, and I've looked at some online discussions of the talk to see where people were confused. But these are my interpretations of the ideas he presented. I can't speak for Rich Hickey or anyone else in the Clojure community.