The magical leverage of languages
This is an episode of Thoughts on Functional Programming, a podcast by Eric Normand.
If I write a straightforward solution to a problem in Clojure, it might take me a thousand lines of code to solve it. To handle all the corner cases and everything, I've got a thousand lines of code. However, if I take this other, much more indirect approach where, instead of solving the problem in front of me, I write a language, a DSL. The DSL could take me 500 lines of code to write. That's a fairly large DSL. Usually they're much smaller, but it takes me 500 lines of code. Then writing the solution in it only takes 10 lines of code.
Eric Normand: Hello, my name is Eric Normand and these are my thoughts on functional programming. I've been working on a course called Domain Specific Languages in Clojure. It's been making me think about a phenomenon that happens frequently when I'm writing Clojure and doing domain-specific languages in little interpreters and things.
The phenomenon is this: if I write a straightforward solution to a problem in Clojure, it might take me a thousand lines of code to solve it. To handle all the corner cases and everything, I've got a thousand lines of code. However, if I take this other, much more indirect approach where, instead of solving the problem in front of me, I write a language.
I make a DSL to allow me to represent that problem and the evaluation of that representation is a solution. The DSL could take me 500 lines of code to write. That's a fairly large DSL. Usually they're much smaller, but it takes me 500 lines of code. Actually, writing the solution in it only takes 10 lines of code.
If you compare them, that's a thousand lines of code if I just solve the problem the straightforward way, versus 510 lines of code. And 500 of those lines are easily reusable if I have several problems in the same domain. It's just weird to me that an indirect approach like that can lead to so much code savings. Obviously, it's a very extreme example, but it's not atypical.
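To make that trade-off concrete, here is a toy sketch of what such a DSL might look like. The domain (discount rules for orders) and every rule name in it are hypothetical, invented purely for illustration; the real DSLs described here would be far larger.

```clojure
;; A toy "discount rules" DSL. The interpreter is the whole language;
;; each rule is just data. (Domain and rule names are hypothetical.)
(defn evaluate
  "Interpret one rule form against an order map."
  [order [op & args]]
  (case op
    :over?   (> (:total order) (first args))
    :coupon? (contains? (:coupons order) (first args))
    :and     (every? #(evaluate order %) args)
    :or      (some   #(evaluate order %) args)))

;; A "solution" is now a few lines of data rather than fresh
;; conditional logic scattered through the codebase:
(def free-shipping-rule
  [:or [:over? 100]
       [:and [:over? 50] [:coupon? "SHIP5"]]])

(evaluate {:total 60 :coupons #{"SHIP5"}} free-shipping-rule)
;; => true
```

Once `evaluate` exists, every new business rule costs a line or two, which is where the 500-plus-10 arithmetic comes from.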
Imagine you had to write a Web server in assembly. At some point, and it's very soon, you would actually be better off writing a programming language, or at least an interpreter, in assembly, and then using that to implement your Web server. The magic is that it takes you so much time to write this new language. On a graph of productivity over time, that's represented as a very flat curve, very parallel to the axis, very low to the axis.
You're not getting much productivity out of it because you don't finish the language. At some point, the language becomes complete enough that you can start developing your solution. You start having this upward trend, this upward curve, and you add to the language. It gets more expressive. You just shoot straight up.
Whereas, if you just started writing it by hand directly, writing a Web server in assembly, you know you're making progress, but it's mostly linear. Maybe it's a little bit more than linear, because you are able to reuse a lot of your code as you get going. It's still not going to shoot straight up. It's not going to elbow upward, at least not very soon.
This is mysterious. Where does that upward shooting come from? You have a Turing-complete language. You have all of the tools available to write anything you need. Yet to make the language, which is a general-purpose tool, and then write a solution in it takes less work and fewer lines of code than just solving the problem directly. It's more expressive.
It's mysterious to me where that comes from. It has something to do with expressivity. It has something to do with the magic that often happens when you are solving a more general problem. It's sometimes easier to solve the general problem than it is to solve the specific problem.
Take the case of garbage collection: a garbage collector is actually pretty simple if you solve it in the general case. If you try to get clever and solve it for a specific case, it becomes a little harder. It might run faster, because you can use all of that knowledge about the specific case, but it's going to be harder to write.
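As a sketch of why the general case is simple: the heart of a mark-and-sweep collector is just graph reachability. The heap model below, a map from object id to the set of ids it references, is a hypothetical simplification for illustration, not how a real runtime represents memory.

```clojure
;; Mark phase: plain graph reachability over a toy heap model.
(defn reachable
  "All object ids reachable from the root set."
  [heap roots]
  (loop [marked #{} frontier (set roots)]
    (if (empty? frontier)
      marked
      (let [obj  (first frontier)
            refs (remove marked (get heap obj #{}))]
        (recur (conj marked obj)
               (into (disj frontier obj) refs))))))

;; Sweep phase: keep only the live objects.
(defn sweep [heap roots]
  (select-keys heap (reachable heap roots)))

(def heap {:a #{:b} :b #{} :c #{:d} :d #{:c}})
(sweep heap [:a])
;; only :a and :b survive; the :c/:d cycle is collected,
;; even though it is self-referencing
```

Notice that cycles fall out for free: the general reachability formulation never has to treat them specially, which is the kind of simplicity the general problem buys you.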
I've been thinking a lot about this mystery of expressivity and general problem solving being easier than the specific, and so much so that you gain this huge productivity, if you can nail that language really well.
One thing that I hear a lot as advice is, let's say you're a game programmer, you want to write a game, the first thing that you might think of is, "Well, I'm going to make a framework and then the games I write will be that much easier to write, because I'll have the framework."
The advice is: do not do that. Just write your game, because you will go down the rabbit hole of frameworking. You'll never get a game done. This is probably good advice. I've been down rabbit holes like that, never to return, never having produced a game, except I have all this code that seems like it might be useful one day when I do get around to writing the game.
Some people do write frameworks, and they are productive in them. I wonder if there just isn't enough study of what makes a good framework, and what makes a good language. A language and a framework are very similar to each other. I wonder about that, whether there's some principle there, some set of "you should write a framework if..." questions.
I know a lot of people say, "Maybe after you've written three games, you can pull out the common stuff." That thinking is mostly there to deter people from writing a framework for their first game. I don't know if it produces better frameworks.
Maybe by the third time, they realize that they're not going to find a good framework. Maybe what makes a good framework is just very elusive and the same with languages. The reason we have a lot of people experimenting in languages and only a few ever see the light of day, could be that it's just hard and it is going to require a lot of failure.
In that case, you should experiment and try to write a framework first. Just accept that 99 percent of them won't work. [laughs] They won't give you anything that you'll want.
Anyway, I've been thinking about this exponential increase in productivity when using DSLs. One of the things that you really want is to be able to make the DSL cheaply. If you build a compiler in the traditional way, where you use Lex and Yacc and compile in multiple passes, the cost could actually outweigh the benefit of writing your own language.
If you're using a language like Clojure or another LISP, the tools you have available make it so easy, and you get all the built-in data types. You're 90 percent of the way there compared to using Lex and Yacc. You do see a huge amount of that leverage.
In one case, I wrote an interpreter in 100 lines of code, then my solution was another 10 lines. That was 110 lines of code, as opposed to writing it outright, which was 600 lines of code. I saved 490 lines of code. You can only do that because writing the language was so easy in a LISP. It's really nothing more than a function or two to write a language, which is nice.
You don't have to worry about parsing, because you already have the reader and literal data structures. You don't have to parse; all you have to do is interpret. You don't need to write a compiler. You can if you want, and it'll be faster, but sometimes you don't care about that. That's really cool.
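To illustrate the point about the reader: because Clojure code is data, a whole "language" really can be a function or two over forms read with `clojure.edn`, with no lexer or parser anywhere. The arithmetic mini-language below is a hypothetical example.

```clojure
;; A complete mini-language in one function. The reader does all the
;; parsing; we only interpret the resulting data structures.
(require '[clojure.edn :as edn])

(defn evaluate [env form]
  (cond
    (number? form) form
    (symbol? form) (get env form)
    (list? form)
    (let [[op & args] form
          vals (map #(evaluate env %) args)]
      (case op
        + (apply + vals)
        * (apply * vals)
        - (apply - vals)))))

;; edn/read-string turns the source text into lists and symbols for us:
(evaluate {'x 4} (edn/read-string "(+ x (* 2 3))"))
;; => 10
```

The environment map, variables, and nesting all come along essentially for free; extending the language is just adding a branch to the `cond` or a clause to the `case`.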
My name is Eric Normand. If you have any insights into where this extra productivity is coming from, why it is that I can write a new language and all of a sudden have an order-of-magnitude decrease in my code size, let me know. It just seems so weird. The whole is more than the sum of its parts, and I don't know where that's coming from.