The #1 Most Important idea in Computer Science

The idea of the Universal Turing Machine is incredibly important. But does your language support both properties?

Transcript

Eric Normand: What is the number one most important idea in computer science? Hello, my name is Eric Normand. These are my thoughts on functional programming. This isn't exactly a functional programming idea, but I'm a functional programmer and this is the kind of thing I think about.

Somehow, I guess that makes sense. This is my opinion on the most important ideas in computer science, and this is the number one most important idea: the universal Turing machine.

I'm actually reading a book. It's an annotated version of Turing's paper that introduced the Turing machine. It's quite a good book. I recommend it to everyone. I'll have a link. I can't even remember the name. It's like — that's it — The Annotated Turing.

The Annotated Turing. I've just been reading it. I don't look at the cover. It was recommended to me by Schmudde. You should check him out. He's an artist doing some cool work with the history of computing. It's awesome stuff. I'll link to him, too.

The number one idea is the universal Turing machine. I don't think we appreciate nowadays how significant this idea is to what we do every day. I'm going to break up the universal Turing machine into two parts.


The first part is the idea that you can have a machine that can compute anything that is computable. It's actually the definition of computability in the paper, in Turing's paper. That's part one. Nothing can compute something that a Turing machine can't.

Let me put it this way: if a program is computable, it can be computed by a Turing machine. Anything that can compute everything a Turing machine can compute is equivalent to a Turing machine. The lambda calculus is equivalent to a Turing machine. That's cool.
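To make that first property a little more concrete, here's a tiny sketch in Python. It's my own toy example, not anything from Turing's paper: a machine is nothing but a transition table, and a generic loop drives it over a tape.

```python
# A minimal Turing machine sketch (toy example, not Turing's original notation).
# The machine is just a transition table: given the current state and the symbol
# under the head, it says what to write, which way to move, and the next state.

def run_turing_machine(table, tape, state="start", blank="_", max_steps=10_000):
    """Run a machine described by `table` on `tape` until it halts."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: walk right, flipping 0s and 1s, halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "10110"))  # => 01001_
```

Notice that the machine itself is just data, a table of transitions. That's going to matter for the second part.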

A lot of people will use this argument of Turing completeness to dismiss discussions about what language is better. Like, "Oh should I use C or Java? Well, it's all just Turing complete anyway. I can do anything in my language that you can do in your language."

It's used as this last resort just to shut down the conversation. There's a second part that's often overlooked, which is that the universal Turing machine is a Turing machine that can run other Turing machines. Not only is it complete, but it can actually run itself, [laughs] which is mind-blowing.

This part of the universal Turing machine is amazing because what it means is that any language can be the implementation language of any other language. You can write a program in your language that runs programs written in any other language. This also has everyday implications. We're writing software.


We buy one computer and we can run different software on it. In essence, our computers are universal Turing machines. The software is just data on the disk that gets fed to the processor, and it can do anything.
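Here's a toy sketch of that idea in Python, a made-up miniature processor, not any real instruction set. The loop is the "hardware," and the program is just a list of instructions, plain data. Feed it different data and the same machine does something completely different.

```python
# A toy "processor" (hypothetical sketch): the loop is the hardware,
# the program is just a list of (opcode, argument) pairs -- plain data.

def processor(program):
    stack, pc = [], 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":
            print(stack[-1])
        pc += 1

# Two different "pieces of software" on the same machine: just different data.
processor([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)])  # 5
processor([("PUSH", 4), ("PUSH", 5), ("MUL", None), ("PRINT", None)])  # 20
```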

We're living in a world of software. Think of the implications. This means that if I buy a computer, I don't have to worry that it's not going to be able to run some piece of software. It can run anything.

It's a computer. It's a universal computer that runs as a universal Turing machine. It can do anything that can be done. Obviously, if your machine can run it, my machine can run it. That's pretty cool. We don't have to worry about that.

That's like saying, "If your pipe can transfer water, my pipe can transfer the same water." That's awesome. It makes the computer itself a commodity and what's important is the software we run on it.

In 1958, John McCarthy published a paper that introduced LISP. It was meant to just be a notation for talking about computer software. It wasn't ever meant to be a programming language, not at that time. In the paper, he defines the semantics of LISP in terms of LISP.


What he did was he basically invented the first LISP interpreter by writing it in LISP, which didn't run, by the way. He had to write it, pretend it already existed, and then show how to make it work. It's screwy. It's screwing with my head that he could bootstrap it like that.

Of course, it didn't work. Someone had to actually write the interpreter in assembly so that he could run code in LISP. Notice that from the start, LISP had a proof that it was universal, because it could run itself, just like a universal Turing machine can run a representation of other Turing machines.
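To give a flavor of what that looks like, here's a rough sketch of a McCarthy-style eval, written in Python rather than LISP, so it's my own toy rendering, not his actual definition. Expressions are nested lists, and the whole interpreter is basically one recursive function.

```python
# A toy, McCarthy-flavored eval in Python (hypothetical sketch, not his code).
# Programs are nested lists ("s-expressions"); the interpreter is one function.

def evaluate(expr, env):
    if isinstance(expr, str):                 # a symbol: look it up
        return env[expr]
    if not isinstance(expr, list):            # a number or other literal
        return expr
    head = expr[0]
    if head == "quote":                       # (quote x) -> x, unevaluated
        return expr[1]
    if head == "if":                          # (if test then else)
        _, test, then, alt = expr
        return evaluate(then if evaluate(test, env) else alt, env)
    if head == "lambda":                      # (lambda (params...) body)
        _, params, body = expr
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    fn = evaluate(head, env)                  # application: eval fn, then args
    args = [evaluate(a, env) for a in expr[1:]]
    return fn(*args)

# A tiny global environment of primitives.
global_env = {
    "+": lambda a, b: a + b,
    "*": lambda a, b: a * b,
    "<": lambda a, b: a < b,
}

# ((lambda (x) (* x x)) 7)  =>  49
print(evaluate([["lambda", ["x"], ["*", "x", "x"]], 7], global_env))
```

That's the whole trick: the program is just data, and eval walks it. A real LISP eval adds a handful more forms, but it stays about this small.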

You don't have that in a language like C or Java. No one said, "This is how Java works" by writing an interpreter for it in Java. No one did that for C. What I find is that yes, every language has the first property of the Turing machine which is that it's complete.


It can calculate anything that's calculable. All of the languages are like that. What they don't have is the second property, which is that it's universal, or at least it's not very easy to make them universal. Could you imagine writing an interpreter for C in C? It'd be pretty hard. Or an interpreter for Java in Java?

Pretty hard. It's a big language. There's a lot to do. Whereas a LISP, like the simple LISP that he defined in 1958, is something pretty easy these days. It's like a weekend project, I would say.

If you've done it before, you could probably do it in a couple of hours. This is something I think is special. Just like with the bootstrapping nature of it, the recursive defining of it in terms of itself, it's hard to wrap my head around what it actually means.

Is it that significant? I think it is. I think that, without tapping into this most important idea in computer science, other languages are missing out on something important. It's not just "I can run software written in your language," it's "I can define new languages quickly that run software."

I can define a Turing complete language on top of my language in a couple of hours and be running a different language. At the end of the day if we're all Turing complete, if every language is Turing complete, you've got to bring something to the table.

The thing that we call it is expressivity. People talk about how many lines of code it takes to do this, how many lines of code it takes to do that. My friend, Will Byrd, said it best: LISPs are humble because they're the only programming languages that recognize that they don't know the best way to solve every problem.

They give you tools to write new languages or to extend the language. In most languages, we don't take advantage of this. We stay low level. We stay at the level the language gives us. We don't build up a new language that gives us more expressivity to solve the problems.

It's not about whether it takes 10 lines of code to solve some problem in a language. You can always find a language that does that. In fact, you can often find a language where there's a library built in to just do the thing you want. It's one line of code.

What's important is to be able to define a new language easily, in a small number of lines of code, so that you can solve that problem over and over. It's not the 10 lines that matter. It's the hours. It's the months. It's the years of work that it takes to actually build a product.

If you can get a huge leg up, a lot of leverage and expressivity, even if it takes a significant amount of code, that will give you a huge advantage when you're coding something important for the business.

You probably disagree with me. Maybe, you agree. I'd love to hear about it. I love talking with my audience. It is the thing that keeps me going on this thing. Please get in touch with me.

I'm Eric Normand on Twitter. Find me at lispcast.com, my website. Along with other episodes, please rate and review me. I love it when other people find out about me. If you think this is valuable, you should help them find it, too. Thank you so much. Bye.