Lisp: A language for stratified design
This is an episode of The Eric Normand Podcast.
Subscribe: RSS, Apple Podcasts, Google Play, Overcast
In this first episode of season 3, we analyze a great paper called Lisp: A language for stratified design. The paper asks the question "What is Lisp good for?"
Transcript
Eric Normand: "People who first learn about Lisp often want to know for what particular programming problems Lisp is the right language. The truth is that Lisp is not the right language for any particular problem. Rather, Lisp encourages one to attack a new problem by implementing new languages tailored to that problem. Such a language might embody an alternative computational paradigm as in the rule language, or it might be a collection of procedures that implement new primitives, means of combination, and means of abstraction embedded within Lisp, as in the Henderson drawing language."
My name is Eric Normand. We are talking about a paper today called Lisp: a language for stratified design by Harold Abelson and Gerald Jay Sussman. Now, these are the authors of Structure and Interpretation of Computer Programs, and, as far as I can tell, they take two examples from within the book and extract them into a research paper/article that they published. And this is dated August, 1987.
That paragraph was from the conclusion; it's almost the last thing they say. And the reason I read that section is because every research paper has sort of a main question that it's trying to answer, and I think that's the answer. And the question is, what is Lisp good for?
Why do people like Lisp? Now we have to go back to 1987 for historical context: there weren't a lot of functional programming languages, and Scheme was very young at that time. The authors of this paper worked on the original Scheme. So it's important to put ourselves back there. When people talk about Lisp, a lot of what they're talking about is what we would call functional programming today: first-class functions, a lot of interesting data types that you can manipulate recursively, things like that, which really differed from what other languages gave you. Okay, so I'm going to be reading excerpts from this paper.
I really liked this paper and I've read it, I don't know, four or five times now. I made notes and so I'm going to read some sections and talk about them. Okay. Just as an analysis of the paper. Like I said, August, 1987 so we're going to have to go back in time a little bit here. Okay. So the first part I want to read is in the abstract.
"Lisp as a language for expressing the design and organization of computational systems."
Okay. So this is, you know, if you go back to the context in which this was written, people weren't talking about programming languages as a means for expressing a design or an organization. It was a way to get the computer to do what you needed it to do. So they're establishing what I think today is a lot more well-understood and well-accepted: that there are different readers. You have to write the program for your machine, for your compiler, so that it can run on the machine. But you're also writing it for human readers: for yourself at a later date, because you're going to be reading this later, and for the team of people building the software with you. And so they're trying to build out this idea that we kind of take for granted today.
So that's a big theme when you're reading these old papers that sometimes you stumble upon a paper where they're saying something totally obvious to someone from 2019, which is when I'm--- 2020, when I'm recording this. But then you realize, Oh no, back then it must have been different. This must have been something you had to say as context for why you're writing the paper.
So I think that that's very important in this paper. For instance, they say "a programming language is a medium for communicating methods, not just a means for getting a computer to perform operations. Programs are written for people to read as much as they are written for machines to execute."
Okay. This takes a very prominent place in the paper. It's in the first paragraph that they're trying to establish this idea that I think is pretty well understood today. You're not just trying to write the most efficient program for your computer to run. You basically live in this software: as an employee of the company, you're going to be dealing with it all the time, modifying it, trying to debug it. You need it to be for people as well.
"This article exhibits programs that illustrate the power of Lisp as a language for expressing the design and organization of computational systems."
Okay, so it's answering that question: what is Lisp for? According to this paper, what they're trying to say is that it does more than other languages do.
It's particularly good at something. There's a reason they like Lisp. They wrote Scheme, right? They must have chosen a Lisp for a reason. So they're trying to explain themselves.
Okay. And now they're talking about the paper.
"The examples are chosen to highlight the importance of abstraction in program design and to draw attention to the use of procedures to express abstractions. Any programming language provides primitive components, means by which these can be combined, and means by which patterns of combination can be named and manipulated as if they were primitive."
So often when we're reading, when we're researching, we're looking for The Truth. And that's dangerous because the truth is elusive. Often there are multiple truths, and it's much better to find perspectives, different ways of looking at some phenomenon. So I'm going to invite you at this time not to ask, is it true that Lisp is better for something?
Or, what is a programming language and what does it provide? Don't demand The Truth; rather, let this paper be a doorway. Enter into this world. You can always leave; it's totally safe. But they're establishing some definitions here. This happens to be the same definition that they give at the beginning of Structure and Interpretation of Computer Programs.
Another reason why this is a great paper: it's short. It boils down a really great idea. But let's go over those things again: "any programming language provides primitive components, means by which these can be combined, and means by which patterns of combination can be named and manipulated as if they were primitive."
Okay, and this last one: patterns of combination can be named and manipulated. This is what they mean by abstraction. There are different definitions of abstraction, and sometimes we even use the word without really knowing what we mean by it, but they're defining it right here. So you write a new function, you give it a name, and now that function, because it's first class, can be combined in other ways using other primitives that exist.
Okay. So I want to stick with this for a little bit. Primitive components: this is the stuff that comes with the language, your built-in data types, your built-in functions. Then you have means by which these can be combined. So in a Lisp, that would be making a new function, or making a compound data type if you've got types in your language. In something like Java, it would be that you can combine classes, objects, into a new class.
And then means by which those patterns of combinations---so the things that you created and combined---you can give them a name and manipulate them as if they were primitive.
So you're back in the cycle. You've built all these things. Now you can act like they're primitive. You can act like they came with the language. But you wrote them; they're part of your software. So this opens up the idea that there's a method to our madness: we have a cycle where I can take something, build something new out of it, and then treat that new thing as if it originally came with the language.
Okay. A lot of languages can't do that. In a lot of languages, you could make a subroutine. But that subroutine is different from the keyword that came with the language. Right. Or you could make a data type, but it's never going to be like the data types that come with the language.
Some languages are like that. Probably at this time, more languages were like that. Okay. We've learned a lot since then.
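To make that cycle concrete, here's a minimal JavaScript sketch (the names are my own illustration, not from the paper): combine primitives, name the combination, then combine the named thing as if it were primitive.

```javascript
// "Primitives": functions that come with, or ship alongside, the language.
const inc = (x) => x + 1;
const double = (x) => x * 2;

// A means of combination: ordinary function composition.
const compose = (f, g) => (x) => f(g(x));

// Means of abstraction: name a pattern of combination...
const incThenDouble = compose(double, inc);

// ...and the named thing now combines exactly like a primitive.
const twiceOver = compose(incThenDouble, incThenDouble);

console.log(incThenDouble(3)); // 8
console.log(twiceOver(3));     // 18
```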
Okay, so I'm going to continue.
"With appropriate abstractions to separate the specification of components from the details of their implementation, we can provide a library of standard components that can be freely interconnected, allowing great flexibility in design."
Okay. So there's a lot of words in here that we need to go into, and they're going to touch on this topic.
This is very early. They're still introducing what they're going to talk about. And I've gone through this sentence many times trying to really understand it. I mean, it sounds good, but I wanted to understand. These papers are written very tightly. I've written research papers before and you write them and you rework phrasing. You have a space limit.
So you're trying to condense stuff and you don't want to miss this important idea. So you, you really tighten up your language so that you express everything. So I trust that they've done that and that this sentence, very dense sentence, contains a lot of information. Okay. So
"with appropriate abstractions"
which means you have to do some work. You have to find the abstractions. Can't just use any old abstractions. You have to do this on purpose. You're trying to find these abstractions, which remember means you build a thing and you name it. Now you can use that thing as if it were primitive.
"To separate the specification of components from the details of their implementation."
Now, you don't know how many times I pored over this phrasing and re-read the article with this idea in mind. This is what I think it means. I don't think it's clear; it's a guess. I'm maybe 80 or 90% sure. The specification of components: I think what they are talking about, in an abstract, conceptual way, is the function signature. In a typed language we would think of it as the type of the function: what arguments does it take, what are their types, and what does it return. And then maybe the documentation, what is it supposed to do, because you can't go just by the function signature.
You have to go also by what it's supposed to do. But that's all the spec, the specification. So you're separating that from the details of the implementation, and the implementation is the body of the function. How does it work? What is it doing mechanically?
"We can provide a library of standard components that can be freely interconnected."
Standard. It's an interesting idea, right? Standard components. I think what they mean is general components, components that you would look at and say, yes, I can reuse that; that's the standard implementation of that idea. So you can provide a library of standard components that can be freely interconnected, allowing great flexibility in design.
So that is the sentence. This incredibly dense sentence is introducing this method that they are about to explain. Okay. Now again, the title is Lisp: A language for stratified design. So they're explaining the stratified design method. Okay. All right.
"A language for design should not unnecessarily limit our ability to make abstractions."
What do they mean by limit? Limit our ability.
"Most traditional programming languages, however, place arbitrary restrictions on procedural abstractions."
They're saying that there are things we should be able to do that our languages don't let us do.
"Three common restrictions are 1) requiring that a procedure be named and then referred to by name rather than stating its definition at the point of reference."
Alright, so right there: they want anonymous functions. Many languages don't have anonymous functions; you have to define them with a name and then refer to them by name later. So they want anonymous functions. Okay, that's great. Luckily, in 2020, a lot of languages have anonymous functions. JavaScript does; in a certain sense, you could even say that Java does now. Most functional languages obviously have anonymous functions. So that's progress, I guess. And also good: that means you might be able to do stratified design in these other languages besides Lisp. Okay.
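As a quick illustration (mine, not the paper's), restriction (1) lifted in JavaScript: the procedure's definition appears right at the point of reference, with no name required.

```javascript
// An anonymous function stated at the point of reference.
const squares = [1, 2, 3].map((n) => n * n);
console.log(squares); // [1, 4, 9]
```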
"2) forbidding procedures to be returned as the values of other procedures."
Now, when I read this, I keep thinking about the use of the word procedure. They're not using the word function. And I wonder if it's just a trend now to use the word function, because you know, the JavaScript keyword is function. You know, there's like just a lot of stuff in the air. We're calling them functions, C called them functions. They're calling them procedures, which I really respect because that's what they are.
They're step-by-step. They're imperative, you know, at least in Scheme. I know other languages like Haskell do aspire to have their functions be much more like the mathematical notion of function.
But in Scheme, they don't do that. So I really appreciate the use of the word procedure here. And so number two: they're saying that a lot of languages don't let you return functions from another function, return a procedure from a procedure. Luckily we have that in a lot of languages, too.
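A sketch of restriction (2) lifted (again my example, not the paper's): a procedure returned as the value of another procedure.

```javascript
// makeAdder returns a new procedure that closes over n.
const makeAdder = (n) => (x) => x + n;

const add5 = makeAdder(5);
console.log(add5(10)); // 15
```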
So let's go to
"3) forbidding procedures to be components of such data structures as records or arrays."
I mean, really, the three restrictions they've brought up, common, they say, though maybe not so common anymore, are all about first-class functions. They're saying we need first-class functions.
We want first-class functions. That is great. I feel like we've made a lot of progress, and I've said this before. I think that JavaScript is the best thing to happen to functional programming in a long time. Just because of this, right? We have first-class functions. People are used to doing anonymous functions all over the place, returning functions, taking functions as arguments.
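And a sketch of restriction (3) lifted (my example): procedures stored as components of a data structure, here a dispatch table.

```javascript
// Procedures as components of a data structure.
const ops = {
  add: (a, b) => a + b,
  mul: (a, b) => a * b,
};

// Look a procedure up by key and call it like any other value.
console.log(ops.add(2, 3)); // 5
console.log(ops.mul(2, 3)); // 6
```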
It's great. All right. So they're gonna start contrasting stratified design with what they're calling top-down structured design. They're calling it a well-publicized programming methodology. I haven't read those papers, but I do know that one thing I was taught in school was: you have this program to write, so you break it up into steps, and then each of those steps becomes a procedure. And then those procedures become a bunch of steps where each step is a procedure, and you just kind of take it from the top down. Like, what are the things that need to happen?
And then, well, that's kinda hard; it's not a one-liner. So let's make it a procedure. And then that's not a one-liner, so let's make it, you know, four procedures. You just break it up from the top down.
Okay, so they don't like that.
"The methodology is flawed. If a system is to be robust, it must have more generality than is needed for the particular application."
I think this is a really interesting statement. If it wasn't in a dense, published, peer-reviewed article, you might just dismiss it. You know, we're so used to reading blogs that are just kind of like first drafts.
Right. So I'm not sure about Harold Abelson, but Gerald Sussman was an electrical engineer, and they have a law, I can't remember the name of it, but you're supposed to be a better signal generator than what you can receive. And that cleans up the electrical signal as it moves through a circuit, right?
So let's say you're a resistor, and some component that is feeding you voltage is noisy, right? It's supposed to be a clean plus five volts, but it's wavy; it's not clean. You're trying to make your resistor able to deal with that, but also output a clean five volts. You don't want to pass the garbage through; you want to clean it up. This is related to the idea of control systems, cybernetics: you've got to have a more flexible system. To be robust, the system has to be more flexible than the application it's dealing with. Otherwise, you don't get control if you are less flexible.
"The means of combining the parts must allow for after-the-fact changes in the design plan as bugs are discovered and as requirements change."
Okay? So we have a clear benefit here. They're saying we need to be able to change the design as bugs are discovered and requirements change. Okay. This is 1987 just for context.
"It must be easy to substitute parts for one another and to vary the arrangement by which parts are combined."
Okay. And so they're saying that, okay, wait, let me finish.
"This is necessary so that small changes in the problem to be solved can be effected by making small changes in the design."
A small change should be a small change: a small change in the problem means a small change in the design. This is still that same paragraph where they are saying that top-down structured design doesn't work. You cannot take a system, break it down through procedural abstraction like that, and then vary the design easily. That's what they're saying; that is their assertion. What you do is make a system that does exactly what you thought it was supposed to do at the beginning, and it doesn't have more generality.
It's not more flexible than the system it is programming. It's probably either exactly as flexible or a little less flexible. And so now, as your requirements change, you learn something new, a bug happens, and your code, which did exactly the right thing, is not able to adapt. I mean, you can change it, but it'd be a lot of work for even a small change.
Okay. So now this was their sort of antithesis. Right? So now we're starting into what is stratified design.
"To this end, expert engineers stratify complex designs. Each level is constructed as a stylized combination of interchangeable parts that are regarded as primitive at that level. The parts constructed at each level are used as primitives at the next level. Each level of stratified design may be thought of as a specialized language with a variety of primitives and means of combination appropriate to that level of detail."
This is, I mean, this is the money paragraph right here. Because what they've done is they've taken their definition of what a programming language has, three things, and they've said: we can stratify this. One level defines the primitives; the next level defines new means of combination that are named; and then those named things become primitives at the next level. And stratified means layered.
They're saying layers, levels; I think they're using them interchangeably here. So each level is constructed as a stylized combination of interchangeable parts that are regarded as primitive at that level. You've got your primitives that are defined at one level. Then at the level you're programming at right now, you combine them and name them. And then at the next level, those things that you just named are the primitives. That's stratified design. So you're basically inventing languages at each level, what they're calling languages, because you have the primitives, the means of combination, and the way to name something.
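Here's a toy sketch of those levels in JavaScript (all names are my own, far simpler than anything in the paper): each level's named parts become the primitives of the level above.

```javascript
// Level 1: primitive parts (string -> string transformers).
const upper = (s) => s.toUpperCase();
const exclaim = (s) => s + "!";

// A means of combination shared by every level: left-to-right composition.
const pipe = (...fs) => (s) => fs.reduce((acc, f) => f(acc), s);

// Level 2: combine level-1 parts and name the result.
const shout = pipe(upper, exclaim);

// Level 3: built only from level-2 "primitives"; level 1 never appears.
const shoutTwice = pipe(shout, shout);

console.log(shout("hi"));      // "HI!"
console.log(shoutTwice("hi")); // "HI!!"
```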
Okay. They talk, they give an example of electrical design and you know, I'm not really familiar enough with that to really understand the digital circuits, so I'm not going to read it.
"The real power of Lisp is that its unrestricted abstractions support the construction of new languages, greatly facilitating the strategy of stratified design."
So this is it. This is where they're explaining the title. They're saying the main benefit of Lisp is stratified design. The main benefit, you know, what is, you know, people ask me this about Clojure all the time. Well, what's it, why? You know, what problem is it particularly good at?
This comes from this idea of, say, Python: oh, it's good for machine learning. Or JavaScript: oh, it's good for doing webpages. What is Clojure good for? Well, it doesn't have an easy answer. But this is an answer. It is their answer: the real power of Lisp is its unrestricted abstractions.
Basically, they're saying first-class functions support the construction of new languages, greatly facilitating the strategy of stratified design. So there you go. It's a language for, well, we might call them DSLs now. It's a language that lets you build new languages, and in the paper they actually go over two different ways of building two languages.
Wow. I'm only on page two. Okay. So that's just because that's a very dense introduction. I read a lot of that. Okay. So I'm gonna start skipping around here in this.
This is section one. This is expressing abstractions as procedures.
"Procedural abstractions can help elucidate a process by allowing us to express it as an instance of a more general idea."
What they're saying here is that when you solve a problem with a procedure, you are expressing an instance of a general idea, a general process. And you can take that procedure, let's say you've written a program to do something a certain way, say using a for loop, and you can actually extract out that for loop
into map, for instance, which is a more general idea than that specific instance of the for loop that you used. It's not more general than for loops in general, because a for loop can do anything. But map is more general than that particular use of the for loop, and now you can use the procedure map in place of that for loop.
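Sketched in JavaScript (my example, not the paper's): first the specific for loop, then the same computation with the loop pattern extracted into a map procedure.

```javascript
// The specific version: a for loop that doubles every element.
function doubleAllLoop(xs) {
  const out = [];
  for (let i = 0; i < xs.length; i++) {
    out.push(xs[i] * 2);
  }
  return out;
}

// The general pattern pulled out: apply any procedure to each element.
const map = (f, xs) => {
  const out = [];
  for (const x of xs) out.push(f(x));
  return out;
};

// The original problem restated as an instance of the general idea.
const doubleAll = (xs) => map((x) => x * 2, xs);

console.log(doubleAllLoop([1, 2, 3])); // [2, 4, 6]
console.log(doubleAll([1, 2, 3]));     // [2, 4, 6]
```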
So you're extracting out a more general idea. And notice this is subtle, but if you're thinking of building layers out of the stuff under it, the more general stuff is at the bottom, right? You're extracting map out, since this function calls map as a primitive, it has to be above it. And so map is now the more general idea lower down.
And this makes sense if you follow that logic all the way to its conclusion. So map is written out of the for loop, which is more general. And the for loop compiles to machine code, which itself is more general; that's what the machine knows how to do. And then that's built out of NAND gates and stuff.
That's even more general, so it makes sense if you follow this down, that general stuff goes to the bottom. I think one of the confusions is that general and abstract tend to be used interchangeably, but here what they mean by general is you can use it for more situations. Abstract doesn't really mean you can use it in more situations.
And they have a specific definition of abstract, which means like an abstract concept. You know, you see people holding hands and people kissing and you'd say, Oh, that's all love, right? So you, you've named this thing that abstracted away all the details and now you can talk about love as a thing.
That's what they're talking about. Now, love can be part of your philosophy. You can write books about it. It doesn't have to be any particular person and their love. Okay. All right.
So they go into a square root implementation. I'm not gonna read that; you should read the paper. I don't have much to say about it, but they mention that it "is not built out of components that can be easily isolated for use in solving other problems". They solve the problem; it works. They have a procedure that does what it needs to do, but it is not built out of components that can be easily isolated.
So they proceed to isolate little pieces of it and pull those out into more general functions that are reusable for solving other problems.
Okay. So after doing that, they have a couple of other functions that are now reusable. They can use it in other situations.
"The advantage of this formulation is that it decomposes the method into useful pieces. The two pieces are finding fixed points of general functions and using damping to encourage convergence. These ideas are formalized as procedural abstractions, identifiable units that are available to be used in other contexts."
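The paper's Scheme version decomposes square root into exactly those two pieces. Here is that decomposition rendered as a JavaScript sketch (the tolerance and starting guess are my choices):

```javascript
// Find a fixed point of f by repeated application from a starting guess.
const fixedPoint = (f, guess, tolerance = 1e-8) => {
  let x = guess;
  for (;;) {
    const next = f(x);
    if (Math.abs(next - x) < tolerance) return next;
    x = next;
  }
};

// Average damping: a function with the same fixed points as f,
// but better convergence behavior.
const averageDamp = (f) => (x) => (x + f(x)) / 2;

// sqrt(a) is a fixed point of y -> a / y; damping makes it converge.
const sqrt = (a) => fixedPoint(averageDamp((y) => a / y), 1.0);

console.log(sqrt(2)); // ≈ 1.4142135...
```

Note that fixedPoint and averageDamp stand on their own as reusable pieces, which is the whole point.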
Okay. They're now saying this is how mechanically it is done. You have some problem, stuff gets pulled out underneath, and now there are primitives that can be used at this higher layer, right?
They're trying to explain this idea of abstraction, that this is how abstraction can give you some kind of power, some reuse, a general purposeness, right? And you're actually expressing a design, a deeper design. Now we're getting into stratified design.
This is section two. Okay. All right, so this describes this whole like picture language.
It's really interesting, really fascinating, but I don't want to go into the picture language, but I do want to talk about some points that they make within that. So just a little bit about the picture language.
"There is a language of primitive pictures that are constructed from points and lines. Built on top of this is a language of geometric combination that describes how pictures are placed relative to one another to make compound pictures. Built on this is a language that abstracts common patterns of picture combination."
All right, so what they're describing here is actually four different levels. First there are points and lines, however those are described. Then on top of that, there's a language of primitive pictures that are just points and lines, just drawing little things. On top of that, the third level, there's a language of geometric combination: stuff like, put this next to this, put this on top of this, flip this one, mirror it so you have the two next to each other. And then fourth, a language that abstracts common patterns of picture combination.
So they have all sorts of cool stuff, like, you know, find the fixed point of spreading this out and it like gets smaller each time. It's, it's pretty neat. They have four different levels that this is abstracted into. I'm not going to go into all of those different operators they have, but here's an important idea that they bring up.
"In this, the set of all pictures is closed under combination."
Okay. They have this function called beside. A procedure, sorry. A procedure called beside that takes two pictures, and it makes a new picture, right? It's closed in the abstract algebra sense: the argument types are the same as the return type, which means you can use the result of a beside in another beside or in any of the other combinations.
And this is this kind of snake-eating-its-tail that gives it a lot of power. I can use these combinators, these ways of combining pictures, and compose them up into bigger, larger structures. And I'm never changing types, right? This means that all of them are still available; I can use them recursively. It's just a very nice, elegant property that you never leave the sphere of pictures in this whole system. You know, first there are the points and lines; I think those aren't pictures. But once you get to the second level, the language of primitive pictures, the third and fourth levels are also pictures.
They're just means of combining those pictures. All right, so there's two properties here.
"The combinator language derives its power from the closure and abstraction properties of the combinators. That is why it can describe seemingly complex figures using only a few simple ideas."
Alan Perlis said something like: I'd rather have a hundred functions operate on one data structure than ten functions on ten data structures. This is that same idea. When you're operating on a specific data type and you've got all these operations that work on it and return values of that same type, everything you make can be combined in more ways. It is an exponentially growing system of combinations that you can do. And so at each level, you're actually increasing the design freedom; you're increasing the number of combinations that you can perform with these primitives. That is what they're talking about.
The power comes from abstraction, meaning I can make a new thing, refer to it by name, and pass it as an argument. It has the same status, it's first-class, as the built-in ones. And because of closure, all the stuff that already exists for operating on pictures can operate on this new thing I've made, and so you're just exponentially increasing the number of possible combinations.
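Here's a toy illustration of the closure property (this is not the paper's Henderson language, which builds pictures from points and lines; the stand-in "picture" here is just an array of equal-length strings):

```javascript
// Each combinator takes pictures and returns a picture (closure),
// so every result feeds straight back into further combinations.
const beside = (p, q) => p.map((row, i) => row + q[i]);
const above = (p, q) => [...p, ...q];
const flipH = (p) => p.map((row) => [...row].reverse().join(""));

const tile = ["/\\", "\\/"];
const row = beside(tile, flipH(tile)); // still a picture
const quad = above(row, row);          // so is this

console.log(quad.join("\n"));
```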
Okay.
Now, remember there are two properties; I should make that clear. There's the closure property. And then there's abstraction, which is that you can make stuff that has the same status as the other things. It can be primitive; it can act in the same way as primitive things.
And then closure, closure with an S, right. I know, I've said Clojure already with a J; this is closure with an S. It means that when you combine two pictures, you get a picture out. Okay, so these two properties are what give you this explosion of possibilities.
"We can vary the pieces at any level. We can change the location of a point in the primitive picture leg, [which they defined somewhere]. We can replace the compound picture animal by some other basic repeated unit. We can replace triangle by some other combination to be pushed, or we can replace push by some other transformation of combinators."
All right, so what they're talking about here, it's another important idea. It's a different idea that you can change one piece of this. So, okay, you build up this complex combination of things, starting from primitives at the lowest level, all the way up to stuff like the fourth level. And at any level you can choose to change something.
I want to change the leg of this cow, right? You're drawing a cow and you're doing this M.C. Escher thing where it's mirrored and rotated and tessellated and shrunk down and pushed to the side, so it's this weird, complex pattern, and I want to change the leg. So you just change the procedure leg, and now all of the legs change; the pattern still persists.
They haven't changed the pattern; you've just changed the leg. Or you can replace the animal: you say, I don't want a cow, I want a dog. Or you can replace triangle, which was one of the combinators that made a corner. It took a square drawing, shrunk it down, and repeated it around the edges on one side, so it made a bigger square out of the smaller pieces. And then push took the fixed point of that, which basically applied it again and again and again until you're down below the pixel level and you can't see the pattern anymore.
So this is what they're talking about with being able to change the requirements and the design. So much of the code can stay the same; you don't have to change it. You can totally change the pattern, the animal, how the animal's leg is drawn, all that stuff, without changing the rest. I can just pinpoint a different level of abstraction.
Now, I was listening to a podcast with Donald Knuth recently, and he was talking about how he believes that one of the requirements for being a good programmer is being able to move up and down these different levels of abstraction very freely. So you write a program and you have to think: wait, is this a problem in the compiler, or in my basic data structures?
Or is this a logic problem at some other level? Or is it an off-by-one error? Because you've got this huge stack of abstractions that you're dealing with, you have to be able to move up and down that stack and diagnose problems freely within it.
Likewise, when you're making design changes, you have to think: wait, I don't need to change most of it, I just need to change the leg. Right. All right, so that is what they are calling stratified design using procedural abstraction. You can make a new language just out of procedures, because remember, what they call a language has three things: primitives, a means of combination, and a means of abstraction. If you have those three things, you have a language, which is why they were calling the different levels different languages. But all they were doing was defining new functions, new procedures.
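To make that concrete, here is a tiny sketch of stratified design, written in Python rather than Scheme so it's easy to run. This is my own illustration, not code from the paper; the names `primitive`, `beside`, `flipped`, `leg`, `animal`, and `pattern` are all made up. The point is that each level is defined only in terms of the level below it, so swapping out one definition, like `leg`, changes every leg in the final pattern without touching any other level:

```python
# Level 0: "primitives" -- here a picture is just a list of labeled strokes.
def primitive(name):
    return [name]

# Level 1: means of combination -- build bigger pictures from smaller ones.
def beside(p, q):
    # Place two pictures side by side (concatenate their strokes).
    return p + q

def flipped(p):
    # Mirror a picture (tagged symbolically, just for illustration).
    return ["flip:" + s for s in p]

# Level 2: named parts of a drawing, made with ordinary procedures.
def leg():
    return primitive("cow-leg")

def animal():
    return beside(leg(), beside(primitive("body"), leg()))

# Level 3: the overall pattern, expressed only in level-2 terms.
def pattern():
    return beside(animal(), flipped(animal()))
```

If you redefine `leg` to return `primitive("dog-leg")`, every leg in `pattern()` changes, and `animal` and `pattern` themselves never get edited. That's the "just change the leg" idea.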
Now they're getting into a section called metalinguistic abstractions. The first time I read it, I was just, oh yeah, metalinguistic abstractions. But then I was like, wait a second, what does that even mean? Is my mind going to be blown? Am I ready for this? Do I need to sit down? But it's actually a thing that we do in Lisp all the time.
And we're going to go into it. It's actually a pretty cool little section here. This is section three, if you're following along.
"For some problems, the appropriate means of combination may be awkward to express as compositions of procedures; towers of abstraction may not suffice."
Okay, just awkward. It's saying, yeah, this doesn't always work, this building up of procedures on top of procedures. Then they talk about different kinds of programming paradigms. They mention object-oriented, imperative, and logic.
"Then no single paradigm is sufficient. Large systems typically have some parts that are naturally described using one style and other parts that are more naturally expressed in other ways."
"Part of the wonder of computation is that we have the freedom to change the framework by which the descriptions of processes are combined."
Here's a mindblower here.
"If we can precisely describe a system in any well-defined notation, then we can build an interpreter to execute programs expressed in the new notation or we can build a compiler to translate programs expressed in the new notation into any other programming language."
That's the wonder of computation, right? This is back to Turing completeness. Any Turing-complete system is able to interpret, or compile programs for, any other such system, which is awesome.
And it's a wonder, right, that once you hit a certain set of features, boom, you're done. You've got everything you need, ever. You're never going to be able to do better than that, which is cool. But they are saying something significant here, besides just being philosophical: a large system typically has some parts that are naturally described using, say, imperative programming, and other parts that are more naturally expressed in an object-oriented or logic style, or something like that.
All right, so this is what they're talking about when they talk about metalinguistic.
"When we design a system that incorporates an application language, we must address metalinguistic issues as well as linguistic issues. We must not only consider how to describe a process, but also how to describe the language in which the process is to be described. We must view our program from two perspectives. From the perspective of the interpreter or compiler, an application program is merely data. An interpreter or compiler operates on that data without reference to what the program is intended to express. When we write programs in the application language, we view that same data as a program with meaning in the application domain."
Okay, so this is what the metalinguistic part is. You're not just programming; now you are designing a language. How do I describe the program? What is the language in which I can describe this program? That is the metalinguistic abstraction. There are two levels. One is: I'm just coding, I know the language, I'm just typing out the syntax. That's what I do. The other level is: I'm now designing how that language looks, what it's going to do, all these things. Those are metalinguistic concerns, and at that level, your program is data: a stream of bytes I have to feed to a parser, et cetera.
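Here is a toy example of that two-perspective view, again a sketch of my own in Python rather than anything from the paper. The "application program" is just a nested list, and the interpreter walks it as plain data, with no idea what the program is for:

```python
# A toy interpreter: arithmetic "programs" are plain nested lists,
# and the interpreter operates on them as data, without reference
# to what the program is intended to express.
def interpret(expr):
    if isinstance(expr, (int, float)):
        return expr                      # a number evaluates to itself
    op, *args = expr                     # otherwise: [operator, operand, ...]
    vals = [interpret(a) for a in args]  # recursively evaluate operands
    if op == "+":
        return sum(vals)
    if op == "*":
        result = 1
        for v in vals:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

# From the interpreter's perspective this is merely a list;
# from ours, it's a program meaning 1 + (2 * 3).
program = ["+", 1, ["*", 2, 3]]
```

From one perspective, `program` is just a list you could sort or print; from the other, it's a program in a tiny language. Same data, two views.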
All right?
"Interpreters and compilers are just programs, but they are special programs. An application program to be interpreted or compiled is not a composition of abstractions of the language in which the interpreter is implemented or to which the compiler translates. This linguistic shift transcends the limits of abstraction."
Okay. So what they're saying is there are things that you can't do just by making new functions and naming them. You can shift to a new paradigm, a new language, that you couldn't reach with just some functions that you combined. You're getting something else: a totally new paradigm, some new abstraction, meaning there are details you can now ignore because you're letting the interpreter or compiler deal with them. All right.
That brings up a good point about abstraction, which they talk about only in these titles, like metalinguistic abstraction. There's another sense of abstraction; it's not just naming.
In this other sense, because there are multiple senses, abstraction is treating two things that are different in the same way. So I've said before: two people holding hands, they love each other. Two people kissing, they love each other. Different people, different phenomena, happening at different times, but we treat them the same. They're both love. That is what we do with abstraction: we're ignoring the details. Now when I'm talking about love, I'm not talking about those two particular hand-holders. I'm talking about everybody. You know what I mean?
It doesn't matter. You could change people, right? Any people holding hands? This is abstraction. This is, we do this all the time. We don't even think about it. And so they're talking about this here. So it says "the interpreter or compiler operates on that data without reference to what the program is intended to express."
So at this low level, the interpreter is ignoring the details of what the program is even supposed to do. It's just like: look, you give me some bytes, I parse them, I do what they say. I don't care what this is for. It could be curing cancer, or it could be sending missiles.
Like, that's not my job. I don't, I don't, I don't care about that. But then at the other level, you know, in the program when you're writing your application, you certainly care about whether you're curing cancer or launching missiles. But what you don't care about is what machine code is this going to get compiled into?
Ah, that's the compiler's job, right? So you're able to ignore these details, and that is a kind of abstraction, a different kind from what they usually talk about in this paper. But it means that each layer (this is me going a little past where the paper goes) in this stratified design is a different level of abstraction. It has different details that you focus on and details that you can ignore. That's just my little bit of extrapolation there.
They make this statement "Lisp's facility with symbolic data provides unusually good support for developing such subsystems." And they're talking about interpreters and compilers.
Okay? In Lisp, we have the symbol data structure and the list data structure, and that gives you quite a lot to work with for writing interpreters. They go through a rule language in which you express simplifications of algebraic expressions. It's interesting; I'm not going to read it, but then they write the rule interpreter.
So they described the language. Then they describe the interpreter and again, they say
"Programming a simplifier in the rule language allows us to focus on the rules themselves without becoming distracted by concerns about the strategy and mechanism by which rules are applied."
So this metalinguistic abstraction is the second kind of abstraction, not the kind they talk about in the first part of the paper. It's an abstraction where you get to ignore details. It's not about naming and manipulating things at a different level; it's about being able to ignore details.
I can write these rules without needing to know the implementation of the rule language interpreter. And likewise, I can write the interpreter without knowing what it's even going to be applied to. In fact, they talk about a different kind of rule language that could express other kinds of simplifications, not just algebraic ones.
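As a rough illustration of that separation, here is a bare-bones rule engine sketched in Python. This is my own toy, not the paper's simplifier: their rule language has real pattern matching with pattern variables, which this skips by writing each rule as a match-predicate plus a rewrite. The point survives, though: the rules and the engine are completely separate, so you can add rules without touching `simplify`, and `simplify` knows nothing about algebra:

```python
# Expressions are tuples like ("+", x, 0); atoms are strings or numbers.
# Each rule is a (matches, rewrite) pair; the engine is generic.
rules = [
    # x + 0  ->  x
    (lambda e: isinstance(e, tuple) and e[0] == "+" and e[2] == 0,
     lambda e: e[1]),
    # x * 1  ->  x
    (lambda e: isinstance(e, tuple) and e[0] == "*" and e[2] == 1,
     lambda e: e[1]),
    # x * 0  ->  0
    (lambda e: isinstance(e, tuple) and e[0] == "*" and e[2] == 0,
     lambda e: 0),
]

def simplify(expr):
    # Simplify subexpressions first, then apply rules until none fire.
    if isinstance(expr, tuple):
        expr = (expr[0],) + tuple(simplify(a) for a in expr[1:])
    changed = True
    while changed:
        changed = False
        for matches, rewrite in rules:
            if matches(expr):
                expr = simplify(rewrite(expr))
                changed = True
                break
    return expr
```

Adding a new simplification is one more entry in `rules`; the strategy and mechanism by which rules are applied live entirely in `simplify`, which is exactly the division of labor the quote is describing.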
It's really interesting. You should read this, but I'm going to breeze through, and move on to the conclusion. I'm going to reread the same paragraph that I read at the beginning. So this is in the conclusion.
"People who first learn about Lisp often want to know for what particular programming problems Lisp is the right language. The truth is that Lisp is not the right language for any particular problem. Rather, Lisp encourages one to attack a new problem by implementing new languages tailored to that problem. Such a language might embody an alternative computational paradigm as in the rule language, or it might be a collection of procedures that implement new primitives, means of combination, and means of abstraction embedded within Lisp as in the drawing language."
"Perhaps that is why Lisp, although the second oldest computer language in widespread use today [only Fortran is older], still seems new and adaptable [1987 remember] and continues to accommodate current ideas about programming methodology."
And it was true. They were very prolific with compiler design and all sorts of stuff when they were doing Scheme. Okay, so this is "A note on Scheme." It comes after the conclusion, in section five.
"Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary."
So this is their philosophy of Scheme right there, which is actually really nicely put. Scheme is a very minimal language, and they made it so that we have first-class functions. One of the innovations they came up with was a way to compile functions more efficiently so that function calls were very cheap.
Subroutine calls in languages up until then were actually really expensive. You would save all the registers; it was like, okay, we're getting ready, we're going to jump all the way over there, and we've got to be able to get back here, so let's save as much as we can. They developed a way to make function calls cheap enough that you can write a one-liner, or define functions that just pass through to other functions, and it's cheap, it doesn't cost much. A lot of that is tail call optimization, that kind of thing. But thanks to them, we now have very fast function calls. That's a paper for another day.
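To illustrate the idea, here is a hand-done version in Python of what a Scheme compiler does for you automatically (Python itself does not eliminate tail calls, so this is just a sketch of the transformation). A call in tail position has nothing left to do after it returns, so it can become a jump instead of a new stack frame:

```python
def countdown_recursive(n):
    # The recursive call is in tail position: the caller does no work
    # after it returns, so no stack frame is actually needed.
    if n == 0:
        return "done"
    return countdown_recursive(n - 1)

def countdown_loop(n):
    # What a tail-call-optimizing compiler effectively produces:
    # rebind the argument and jump back to the top.
    while n != 0:
        n = n - 1
    return "done"
```

In Python, `countdown_recursive` blows past the recursion limit for large `n`, while `countdown_loop` runs in constant stack space. Scheme guarantees the second behavior for every tail call, which is what makes cheap pass-through functions and one-liners practical.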
Well, I think that's the end. This paper was called Lisp: A language for stratified design by Harold Abelson and Gerald J. Sussman, published in August of 1987. An old paper now, and a good one. I learned a lot about stratified design and what makes Lisp special. I hope you did too.
You can subscribe to this podcast on Spotify, Apple Podcasts, Overcast, whatever you like. Just search for Thoughts on Functional Programming.
You can also find all those links at lispcast.com/podcast. There, you'll find this episode and all of the old episodes. Just a note: this is the start of season three, so it's a big format change. I used to riff for short stretches on small but important questions in functional programming, the kind I could answer in 15 minutes.
And now I'm going into these papers. I feel like these papers need to have more light shined on them. And so here I am with a light and I hope you like what I'm doing. Let me know. You can get in touch with me. If you go to lispcast.com/podcast you'll find links to get in touch with me on social media.
Well, this has been my thought, my big thought on functional programming. My name is Eric Normand. Thanks for listening, and as always, rock on.