The Next 700 Programming Languages

In this episode, I excerpt and comment on a seminal paper in programming language design, from all the way back in 1966, called 'The Next 700 Programming Languages' by Peter Landin.


Transcript

Do the idiosyncrasies in language design reflect basic logical properties of the situations that are being catered for, or are they accidents of history and personal background that may be obscuring fruitful developments? This question is clearly important if we are trying to predict or influence language evolution. To answer it, we must think in terms, not of languages, but of families of languages. That is to say, we must systematize their design so that a new language is a point chosen from a well-mapped space rather than a laboriously devised construction. To this end, the above paper has marshaled three techniques of language design: abstract syntax, axiomatization, and an underlying abstract machine. Hello, my name is Eric Normand, and this is my thought on functional programming. In today's episode, I am reading a truly influential paper called "The Next 700 Programming Languages" by P.J. Landin, published in 1966, so a very old paper. I bring up the date because there are going to be a lot of references to things that were happening at the time, and I might have to refer to some of those. What's interesting is that Algol 60 had come out a few years earlier, and that was the main language people were using in publications, so when computer scientists were sharing their algorithms, they were using Algol 60. This paper is about a programming language, actually a programming language family, called ISWIM that Peter Landin was working on, and this paper describes the reasoning behind it. When I read a paper, there's often a central question that it is trying to answer, and I really think that this paper is trying to establish an approach to language design where we can systematically come up with a good set of linguistic constructs for our language, and a general-purpose language at that. We'll get into how he does it in this paper. As usual, I will be reading excerpts from the paper, and I will be commenting on them.
I am not a serious expert in the history of these papers and the field. I wish I were; part of what I'm trying to do is develop that kind of expertise and make it more readily available. But just remember, this is 1966, and the LISP paper, which he references in here, so it's very relevant, the paper that introduced LISP to the world, was in 1958. This is eight years later. LISP was already well known, Algol was out there, people were publishing papers in it, and so this is still somewhat early in the field of programming languages. Reading from the introduction: "Most programming languages are partly a way of expressing things in terms of other things and partly a basic set of given things." The ISWIM system, that's what the name stands for, "If you See What I Mean", "is a byproduct of an attempt to disentangle these two aspects in some current languages." He's trying to disentangle two things: how do you express things, and what things do you have that you can put together in those expressions? "This attempt has led the author to think that many linguistic idiosyncrasies are concerned with the former rather than the latter." I really don't like when authors do this, the former/latter thing, because you have to refer back or remember. The former is the way of expressing things in terms of other things. The latter is the basic set of given things. He's saying that many linguistic idiosyncrasies are concerned with how to express things in terms of other things rather than the basic set of given things, "whereas aptitude for a particular class of tasks is essentially determined by the latter" (the basic set of given things) "rather than the former. The conclusion follows that many language characteristics are irrelevant to the alleged problem orientation."
What he's saying is that a lot of language design, a lot of the idiosyncrasies, what makes one language different from another, is more about how things are expressed in terms of other things, whereas what makes a language good for one purpose or another is usually in the set of given things that are baked into the language. For example, if you are doing a lot of mathematical computations, you want your language to have some given things like a rich set of numbers and arithmetic operations. That doesn't really say anything about how things are combined, how you make an expression out of those arithmetic operators; that's kind of secondary. He's saying that people usually focus too much on how you combine stuff and not enough on what the things are that you're dealing with. At least in terms of differences between languages, maybe we don't need to be focusing on having so many different syntaxes (that's what "how you express stuff" is: the syntax). What we need is a better choice of the primitive things that are given. ISWIM is an attempt at a general-purpose system for describing things in terms of other things, one that can be problem-oriented by appropriate choice of primitives. You can move into a different domain where, say, you're doing a lot of string operations, and now strings are given as primitives. In 1966, I imagine (like I said, I'm not an expert at this, I wish I were), this could be the first paper that really explains a number of these things. One is this idea that there's a part of your language that can be general purpose and another part that is specific to a domain. Think about it: this leads directly to something like Java, where you say, we're going to give you this general-purpose way of making classes, calling methods, defining methods, and so on.
Some basic numbers and things like that that you're going to need, but then you define the rest as domain-specific primitives that you can construct and combine. This idea of separating the general purpose from the specific purpose is really great. The other thing is that he's separating the syntactic idiosyncrasies from the semantic idiosyncrasies. What makes a language different, on the syntax side, could be whether you use square brackets or curly brackets, but that doesn't really help you express a problem better. What helps you express a problem better is the meaning of the things that you're given. Then he starts to break things down. In section two he gives the example of the where notation. This is a way of introducing a variable in a very tight scope. You can say x + 1 where x = 2; substituting gives 2 + 1, which is 3. He wants to analyze this where clause, which is part of ISWIM. He proposes four questions to ask about this construct, sort of a laundry list. One is its linguistic structure. The second is the syntax. Third is the semantic constraints on the linguistic structure, to make sure it is still meaningful. And then the outcome. It's not one of those lists that's a model, where you have mutually exclusive categories that together cover the whole domain and you can know, yes, I've got it all. It's more like a laundry list, vague categories that you divide stuff into; less a model and more a checklist. Under linguistic structure we have questions like: what kinds of expressions can you put as the body of the where clause, what can go on the right-hand side of the definition where you define the variable, what goes on the left-hand side, basic questions like that.
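That where example can be sketched in code. This is a hedged illustration in Python, which has no where clause of its own; I'm mimicking the tight scope with an immediately applied function, and the `where` helper is my own invention, not Landin's notation:

```python
# "x + 1 where x = 2": the where clause binds x in a tight scope
# around a single expression. In Python we can mimic that scope by
# applying a function to the binding right away.

def where(body, **bindings):
    """Evaluate body with the given bindings in scope (illustrative only)."""
    return body(**bindings)

# x + 1 where x = 2  →  2 + 1  →  3
result = where(lambda x: x + 1, x=2)
print(result)  # 3
```

The same shape answers his structure questions: the body is any expression over the bound names, the right-hand side is any expression, and the left-hand side is a name.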
And there's the syntax: what brackets do you need, where are they optional, are there line breaks you need, is there punctuation, that kind of thing. And he makes a note here: note the separation of decisions about structure from decisions about syntax. It's intriguing that he had to call this out at the time, because by the time I started studying computer science, that was pretty well understood. But this is 1966, and he thought it needed mentioning. Then semantic constraints on the structure: maybe you need types, so if this expression is about numbers, then the body has to evaluate to a number, etc. And then the outcome. I don't understand why he's using the term outcome, but he's talking about what happens if you nest them, multiple nested wheres, and whether that changes the meaning. He's saying you have to think about that; you can't just leave it to chance, or leave it undefined what happens when you have name conflicts and the like, because it's going to happen. Someone somewhere is going to write crazy nested where clauses. I think that's well understood now too: why not make it regular and let them nest, because maybe someone needs it and you can't think of why. But I can also imagine, back at that time, the worry about allowing unbounded nesting: how much memory is that going to take? How can we build data structures that handle this nesting? So it was a real concern, but he's saying we should allow it. Then he makes a distinction in section three. I'm just going to summarize it because there's not much that's very quotable there. He makes this distinction between physical ISWIM (remember, ISWIM is his family of languages) and logical ISWIM.
Logical ISWIM is, I like to think of it sort of like the abstract syntax tree, but it's not quite that. Physical ISWIM is the physical representation: if you had to print it in a journal, in a scientific publication, you might use a different character set than if you typed it on a terminal or punched it on cards. He wants to make clear that the language is not defined by the physical medium it's printed in, which is a good idea. Algol had something similar with what they called the reference language. Computers back then didn't have standardized keyboards, so you couldn't rely on a particular easily typeable character set. Even though today we do have that, it's still a good idea to think about things in this separated way. Okay, section four: four levels of abstraction. The physical/logical terminology is often used to distinguish features that are a fortuitous consequence of physical conditions from features that are in some sense more essential. So there's this idea of an essential core: regardless of the character set you use, whether you have a fancy arrow or something else, that's not what's essential to the language. He breaks this up into four levels of abstraction, sort of like how you would break a network stack into layers. One is physical ISWIM. This is the one that gets printed in the paper or appears on your computer screen. Two is logical ISWIM, which doesn't know how it's going to be printed, but is somehow represented in memory with the correct grammar. Then there's abstract ISWIM; this is number three. It's a tree language, like an abstract syntax tree. And then four, applicative expressions.
This is where you have a sort of virtual machine, a small kernel of the language that everything else compiles into. The set of acceptable texts of physical ISWIM is specified by the relations between 1 and 2, that is, physical and logical. So: how do you translate between physical and logical and back, and between 2 and 3, between the logical and the abstract? You could roughly think of it as a tokenizing step, and then a parsing step gets you to 3. You have physical text, you tokenize it, which turns the characters into keywords, and then you take those keywords in sequence and parse them, and that gives you an abstract syntax tree. "The outcome of each text is specified by these relations together with a frame of reference, i.e. a rule that associates a meaning with each of a chosen set of identifiers." So then you have to look up the identifiers. These are the primitives and whatnot that are given in the language, so that they can convert to a certain meaning for that particular program. The specification of the family is completed (remember, we haven't translated to 4 yet; we've only gone from 1 to 2 to 3) "by the relation between abstract ISWIM and AEs" (those are the applicative expressions) "together with an abstract machine that interprets AEs. These elements are the same for all members of the family and are not discussed in this paper." So this abstract machine, this abstract way of mechanically running the applicative expressions, is defined once and is the same for all the languages in the family. This is very much an early kind of universal virtual machine, like the JVM or the Smalltalk virtual machine. And I think it's fascinating that it comes so early.
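To make the four levels concrete, here is a toy sketch in Python of the whole pipeline: physical text to tokens (roughly the logical level), tokens to a tree (roughly the abstract level), and a tiny evaluator standing in for the abstract machine. The grammar and the names here are my own illustration, not Landin's:

```python
# Toy pipeline: physical text → tokens → tree → value.
# This mini-language (parenthesized prefix arithmetic) is illustrative only.

def tokenize(text):                      # physical → logical
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):                       # logical → abstract (a nested tree)
    tok = tokens.pop(0)
    if tok == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)                    # drop the closing ")"
        return node
    return int(tok) if tok.isdigit() else tok

def evaluate(node):                      # a tiny "abstract machine" for + and *
    if isinstance(node, int):
        return node
    op, args = node[0], [evaluate(a) for a in node[1:]]
    return sum(args) if op == "+" else args[0] * args[1]

print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # 7
```

The point of the layering is that you could swap the `tokenize` step (a different character set, a different physical medium) without touching the grammar or the machine.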
The relationship between physical ISWIM and logical ISWIM is fixed by saying what physical texts represent each logical element and also what layout is permitted in stringing them together. The relationship between logical ISWIM and abstract ISWIM is fixed by a formal grammar, like the one in the Algol 60 report. These two relations cover what is usually called the syntax or grammar of a language. It's interesting here, and I'm going to comment a little: when we analyze English or other natural languages, we often talk about the difference between syntax and grammar, and I don't know if people do that so much anymore in programming languages. You can parse directly from a stream of bytes into the abstract syntax tree; you can do it all in one step. There's no real need anymore, because we have so much memory and processing power that you can backtrack over the whole program if you need to. There's no real need to separate it into steps like we used to. So I wonder if this is just old-fashioned thinking at this point, and whether Landin wasn't borrowing a little too much from natural language, where you might parse a sentence into words and then tag them, this is a noun, this is an adjective, this is a participle, to give you more information for how to parse it. I don't know if we need to do that so much anymore. It's just a note; I wish I could say definitively whether this is old-fashioned thinking. Now, one cool thing about this layer system is that it gives you a nice pipeline for how to build an interpreter or a compiler. You start with tokenizing, or what later became known as tokenizing; he doesn't use that word.
You tokenize, then you parse it into an abstract syntax tree, and then you turn that abstract syntax tree into this virtual machine's abstract representation that can be executed directly. It gives you a real glimpse that this is where modern programming language theory gets these ideas of doing things in these particular stages. Anyone can come up with "we should do it as a pipeline," but the fact that he got these particular stages in there, the choice of the steps in the pipeline, is what's really fascinating. Okay, so that covers a lot of the background of what ISWIM is and how it's defined. Section five I'm not going to talk about much, because it's not good podcast material; it's mostly a big grammar definition. He basically gives the whole grammar of the language in section five, and it's fascinating to see how simple a general-purpose language can be if you push all the specific stuff off till later. Section six is a really fascinating section to me. It's called "Relationship to LISP." Remember, at this point LISP is eight years old. It's not what we know of as LISP today; this is LISP as a kind of hacker's-paradise system, mostly in universities, dealing with artificial intelligence problems. Keep that in mind. But still, there's a lot of good stuff, and I also want to read it for historical purposes. "ISWIM can be looked on as an attempt to deliver LISP from its eponymous commitment to lists, its reputation for hand-to-mouth storage allocation, the hardware-dependent flavor of its pedagogy, its heavy bracketing, and its compromises with tradition." And then, again a laundry list, he's going to go over each of those.
One: ISWIM has no particular problem orientation. It's not about list processing; it's not about numerical work or commercial programming or anything like that. There's no bias in it. Two: outside a certain subset, ISWIM needs garbage collection. Apparently in LISP at that time, garbage collection was not as universal as it is today, even though we now think of LISP as one of the defining languages in terms of garbage collection. At that time, it seems people were doing a lot more manual allocation of their cons cells. Three: LISP has some dark corners, especially outside "pure LISP," in which both teachers and programmers resort to talking about addresses and to drawing storage diagrams. That's something people were doing until very recently: drawing cons cells with pointers to other cons cells to show the list structures, tree diagrams and things like that. He doesn't want people to be doing that. It's not about pointers, basically, which is a good movement. In something like Clojure, we don't think about that at all. So it's interesting that this was something he felt he needed to distinguish himself from in the LISP work of that time. Four: the textual appearance of ISWIM is not like LISP's S-expressions. That's a difference in text. Then he goes into the particulars: auxiliary definitions indicated by let and where, instead of the kind of let you would have in LISP; infix operators instead of prefix; indentation instead of bracketing. So, just like in something like Python or Haskell, it's using indentation to mark structure. And this might be the first language that did that.
There's a discussion at the end of the paper where the other big names in the field wonder whether that's going to work. It's a debate we still haven't really settled. I have my opinions, but we can leave that out. Five (he's still going through this laundry list of differences with LISP): "The most important contribution of LISP was not in list processing or storage allocation or in notation, but in the logical properties lying behind the notation. Here ISWIM makes little improvement because, except for a few minor details, LISP left none to make. There are two equivalent ways of stating these properties." He's basically saying LISP got a lot right, and it wasn't about list processing, it wasn't about storage allocation, and it wasn't about notation. Here are the two things: (a) LISP simplified the equivalence relations that determine the extent to which pieces of program can be interchanged without affecting the outcome, and (b) LISP brought the class of entities that are denoted by expressions a programmer can write nearer to those that arise in models of physical systems and in mathematical and logical systems. These remarks are expanded in sections seven and eight, so we're going to go over seven and eight; those are probably the most highlighted sections in my copy. But I just want to comment. So (a) might not be clear from the way I read it, but he's talking about equivalence relations, meaning this expression is equivalent to that other expression. And that matters because in a lot of languages that are statement-oriented, there's very little (I don't want to say almost nothing) you can say about equivalence between groups of statements, much less than between two nested expressions. You can say that two expressions are equivalent even though their program text is different.
You can do that because, as we'll see, you can define equivalence very systematically. And (b) says that it gives you a more natural way of expressing mathematical or logical systems, because that's how people work in math: they write expressions, not statements. Just imagine, compared to something like Fortran, where you're doing a lot of for loops and updating variables in place each time through the loop, it's not the same way you think of a mathematical expression. Say you have a loop that's summing some numbers. In mathematical notation you would use sigma notation, and it would be very clear: I'm summing these numbers. Whereas with a loop, you have to read what's going on inside it: okay, we're incrementing this and adding to that; there's a lot going on that's not as natural. Okay, section seven: the characteristic equivalences of ISWIM. "For most programming languages, there are certain statements of the kind 'There is a systematic equivalence between pieces of program like this and pieces like that' that nearly hold, but not quite. For instance, in Algol 60 there is a nearly true such statement concerning procedure calls and blocks." I need to go through this. He's talking about different language features that are almost the same. He mentions Algol 60 procedure calls and blocks. Let's use C or Java as an example; it's the same for this purpose. You can make a block, just open curly braces and define variables in there, and that block creates a scope where those variables are defined. You could also create a function or procedure and call that procedure instead of writing the block inline. What he's saying is that in Algol 60 those two things are subtly different. They're almost the same, but it's that difference that makes them really hard to work with. And of course, I don't know much about Algol 60.
It might be cool to research what those particular differences are, but it doesn't matter. Here we go: "The author believes that expressive power should be by design rather than accident and that there is great point in equivalences that hold without exception. It is a platitude that any given outcome can be achieved by a wide variety of programs. The practicability of all kinds of program processing depends on there being elegant equivalence rules." There's a lot to unpack here. He thinks that you should design in expressive power, on purpose rather than by accident, meaning you should be able to say the same thing in different ways on purpose. There is great point in equivalences that hold without exception: there shouldn't be this "nearly the same"; it should be exactly the same, and there's a lot of power in that. The argument that you need features that are subtly different because that gives you expressive power, he's saying that doesn't hold. The fact that they are only nearly equivalent makes it harder to express the same thing in different ways, or at least harder to analyze whether two things mean the same thing. All these program-processing programs, for instance optimizing, checking satisfaction of given conditions, constructing a program satisfying given conditions, all the things you might want to do programmatically to your software, depend on there being equivalence rules. Actual equivalence, 100% equivalence. There are four groups, and he's going to go over them one at a time. One is the extent to which a sub-expression can be replaced by an equivalent sub-expression without disturbing the equivalence class of the whole expression: if you have some sub-expression, you want to be able to swap an equivalent one in for it. Two is user-coined names: how do names relate to the things that they are naming?
Three is built-in entities implicit in special forms of expressions. He's talking about things like if-expressions: can we come up with rules about how if-expressions relate to their sub-expressions? And then four is named entities for specific problem-oriented languages in the same family. He gives a bunch of examples; I'll give one from group one, where you can swap out a sub-expression. He gives the rules names, which is kind of nice. This one is called mu, and the equivalence is: if some expression L is equivalent to L′, then L applied to M (another expression) is equivalent to L′ applied to M. Makes sense. This is the kind of thing you would expect in mathematical notation; nothing surprising here, he's just setting it out. These are examples, but it's really nice to have them written down, and maybe this was the first time anyone sat down and published this kind of thing. In group two he's setting up equivalences with names, remember. He's saying that "let x = M in L" is equivalent to "L where x = M"; so there's an equivalence between let and where. The only difference is that the let comes before the body and the where comes after. That's nice: you can express the exact same thing in two different ways. Then there's a sub-group, group two prime, where he shows that you can substitute the definition for the name: "L where x = M" is equivalent to L with M substituted for x. He calls this beta because it's beta substitution. Okay, after those groups, he makes this comment: "The author thinks that the fruitful development to encompass all of ISWIM will depend on establishing safe areas of an ISWIM expression in which imperative features can be disregarded. The usefulness of this development will depend on how successfully ISWIM's non-imperative features supersede conventional programming."
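Before moving on, the equivalences from groups one and two can be sketched with ordinary functions. This is a hedged illustration in Python rather than ISWIM notation; the `let` and `where` helpers are my own encodings:

```python
# Three of the equivalence rules, rendered with plain Python functions.

# (mu) if L ≡ L', then L(M) ≡ L'(M): equivalent functions remain
# equivalent when applied to the same argument.
L  = lambda n: n * 2
Lp = lambda n: n + n              # different text, same function
assert L(21) == Lp(21)            # both give 42

# (let/where) "let x = M in L" ≡ "L where x = M": only the order of
# binding and body differs on the page; both evaluate L with x bound to M.
def let(M, body):                 # let x = M in body(x)
    return body(M)

def where(body, M):               # body(x) where x = M
    return body(M)

assert let(2, lambda x: x + 1) == where(lambda x: x + 1, 2) == 3

# (beta) "L where x = M" equals L with M substituted for x.
assert where(lambda x: x * x, 5) == 5 * 5
```
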
He's trying to create a purely functional subset of ISWIM so that these equivalences can hold as widely as possible in a program. Group three covers conditional expressions; he's making some equivalences for them. He's saying: if you know the test is true, and your then-branch is M and your else-branch is N, then the whole conditional is equivalent to M; you can just ignore the else and take the then. If you know the test is false, then you can just take the else; you don't even need M. So he's creating all these little equivalence rules, and the nice thing about them is that they form a kind of algebra. Your optimizer can take advantage of these rules. Your program analysis can take advantage of these rules. You can take advantage of them to improve the clarity of how you express a certain thing. And because they're regular, they can be part of the compiler: the compiler can convert these things down, down, down into some normalized form before converting them to the applicative expressions. Group four is new axioms that you add for the particular problem domain you have. He's saying, if your problem domain needs integers, you could define an equivalence for integers, like equality: if L = M, where L and M are integers, that is going to be either true or false. So there's some equivalence there. Right, okay. To me, when we talk about things like refactoring, refactoring is just an after-the-fact discovery of equivalence rules.
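To tie the conditional rules and the refactoring point together: the group-three equivalences are exactly the kind of mechanical rewrite an optimizer or refactoring tool could apply. A hedged sketch, with a tuple encoding of conditionals that is my own illustration, not ISWIM's representation:

```python
# Group-three rules as rewrite rules:
#   (if true  M N) ≡ M
#   (if false M N) ≡ N
# Conditionals are encoded as ("if", test, then, else) tuples for this sketch.

def simplify(expr):
    """Fold conditionals whose test is already a known constant."""
    if isinstance(expr, tuple) and expr[0] == "if":
        _, test, then_branch, else_branch = expr
        if test is True:              # take the then-branch, drop the else
            return simplify(then_branch)
        if test is False:             # take the else-branch, drop the then
            return simplify(else_branch)
    return expr

assert simplify(("if", True, "M", "N")) == "M"
assert simplify(("if", False, "M", ("if", True, "N", "P"))) == "N"
```

Because the rules hold without exception, the rewriter never has to ask "is this one of the cases where they're only nearly equivalent?"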
You can say: I can extract a method, because I know that in my language I can either inline three lines or put those three lines into a method and call the method in their place, and it will have the same effect. This is saying: why don't we design the language so that these kinds of equivalences are built in, rather than having to be discovered by extra work on top? And then you have to be super careful: is this really an equivalence? Or maybe you have rules like "in this case, it's not quite equivalent." Okay. Section eight: application and denotation. "The commonplace expressions of arithmetic and algebra have a certain simplicity that most communications to computers lack." Remember the time he's writing in. "In particular, (a) each expression has a nesting sub-expression structure, (b) each sub-expression denotes something, (c) the thing an expression denotes, that is, its value, depends only on the values of its sub-expressions, not on other properties of them. It is these properties, and crucially (c), that explain why such expressions are easier to construct and understand. Thus it is (c) that lies behind the evolutionary trend towards bigger right-hand sides in place of strings of small, explicitly sequenced assignments and jumps." This is 1966, way before "Go To Statement Considered Harmful"; this is when most programming was done with assignments and jumps in sequence, and it's hard, looking at that, to see the mathematical expression it's trying to implement. You're doing an add, an add, a multiply, a multiply... oh, that's the sum of two squares! It's really hard to see that from the steps. "The important feature of ISWIM's equivalence rules is that they guarantee the same desirable properties to ISWIM's non-imperative subset." That's nice to know. He doesn't go deep into that, so I don't want to dwell on it too long.
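That sum-of-two-squares point can be made concrete with a minimal sketch contrasting the two styles; the values are illustrative:

```python
# Imperative style: the result is reached by a sequence of assignments,
# and you have to trace the mutation to spot "sum of two squares".
a, b = 3, 4
total = 0
total = total + a * a
total = total + b * b

# Expression style: one nested expression whose value depends only on
# the values of its sub-expressions (property (c) above).
assert total == a * a + b * b == 25
```

The expression on the last line wears its meaning on its face; the assignment sequence has to be executed in your head to recover the same fact.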
What he's talking about is that we want to be able to create expressions that are arbitrarily nested, and, just like in functional languages, we want to make sure that we can determine the value of an expression based only on the sub-expressions that make it up. It's kind of an obvious thing these days, but someone had to write it down, and it wasn't obvious at the time. He goes into how the particular rules he listed in the last section help with this, and how you can show that this rule and that rule together create an equivalence class where all of these things are equivalent to all of those things. It's interesting, you should read it, but I won't go into it. Just know that this idea of equivalence classes is really, really powerful. Note on terminology, this is section nine. "ISWIM brings into sharp relief some of the distinctions that the author thinks are intended by such adjectives as procedural, non-procedural, algorithmic, heuristic, imperative, declarative, functional, descriptive." He is making a comment about these terms, a lot of which we still use today. For instance, we might make a distinction between declarative languages and imperative languages, or functional languages and procedural languages. We still use these terms. He is basically saying: here's a new word, and this one is better. "An important distinction is the one between indicating what behavior, step by step, you want the machine to perform, and merely indicating what outcome you want." We might recognize this as what we today would call imperative versus declarative: imperative is step by step; declarative is the outcome you want. "Put that way, the distinction will not stand up to close investigation."
"I suggest that the conditions (a) through (c) in Section 8 are a necessary part of 'merely indicating what outcome you want.'" Those conditions, again: each expression has a nesting subexpression structure; each subexpression denotes something; and the thing an expression denotes depends only on the values of its subexpressions. To be declarative, this is a necessary part. I have another episode where I talk about why I don't like the use of declarative versus imperative. I'll summarize by saying they're on a spectrum: a for loop used to be considered declarative because you didn't have to write jump instructions. You can always be more declarative, depending on the domain you're in. You can always say more of what you want and less of how to get it — be more divorced from the machine and its step-by-step operation. People say SQL is declarative because you don't tell it how to run the query. That's not really true: if you are seriously into SQL, you are telling it exactly what steps to take. You get a feel for how the optimizer will take your SQL statement and turn it into joins and so on. Same thing with a pipeline of maps and filters and reduces. They're definitely more declarative than for loops, but when I write them, I'm thinking step by step: map this, then filter this, then filter this — in steps. It's imperative in that way. The terms are only useful in a relative sense, not an absolute one. What he's saying is: maybe this subset, this necessary part, is the thing we should talk about. "The word 'denotative' seems more appropriate than nonprocedural, declarative, or functional. The antithesis of denotative is 'imperative.' Effectively 'denotative' means 'can be mapped into ISWIM without using jumping or assignment,' given appropriate primitives."
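Here's the kind of contrast I mean, sketched in Python (my example, not the paper's): the pipeline is more declarative than the loop, but I can still read it as a sequence of steps.

```python
nums = [1, 2, 3, 4, 5, 6]

# Loop style: explicit sequencing, an accumulator, mutation.
evens_squared_loop = []
for n in nums:
    if n % 2 == 0:
        evens_squared_loop.append(n * n)

# Pipeline style: "filter this, then map this" -- still steps,
# just without jumps or assignment to an accumulator.
evens_squared_pipe = [n * n for n in nums if n % 2 == 0]
```

Both produce the same list; neither is absolutely declarative. The second is simply further from the machine's sequencing than the first, which is the relative sense in which the terms are useful.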
Like I was saying, he's saying that a better term than declarative, functional, or nonprocedural is "denotative," because it can actually be defined by those three properties he talked about before: each expression has a value, and an expression is made of subexpressions whose values uniquely determine its value. That's something you can judge absolutely. A call to map on an array is denotative, because its value is totally determined by the values of its subexpressions. The call itself has a value; its arguments have values — those are its subexpressions — and you can determine the value the expression results in from the values of those subexpressions. Maybe that's what people mean by "declarative," but they haven't read this paper, so they don't use this term; they fall back on the other way of describing it — you say what you want, not how to do it. But then at the end he says: effectively, denotative means "can be mapped into ISWIM without using jumping or assignment, given appropriate primitives." Look at this: he's saying that if you don't use those two features of the ISWIM language — jumping and assignment — then you are effectively denotative, without having to think about expressions and subexpressions explicitly. "It follows that functional programming has little to do with functional notation. It is a trivial and pointless task to rearrange some piece of symbolism into prefixed operators and heavy bracketing. It is an intellectually demanding activity to characterize some physical or logical system as a set of entities and functional relations among them. However, it may be less demanding and more revealing than characterizing the system by a conventional program, and it may serve the same purpose."
"Having formulated the model, a specific desired feature of the system can be systematically expressed in functional notation. But other notations may be better human engineering. So the role of functional notation is a standard by which to describe others, and a standby when they fail." Whew, okay, that was long. Let me go into this. "It follows that functional programming has little to do with functional notation." You could imagine a purist saying: we are doing functional programming, so we should express everything in the lambda calculus. In the lambda calculus, all you have, basically, is defining functions and calling functions. When you define a function, you get a variable; so every time you want a new variable, you have to define a function and then call it. You could imagine a purist saying this is the only way to get functional programming, and Landin is saying: no, you can actually achieve it with lets and wheres and other constructs, because we have gotten rid of jump and assignment. Without jump and assignment, it is functional programming. The next thing he's saying is that it might be easier to write a regular program in ISWIM than in something like the lambda calculus, and it serves the same purpose. You can always convert the ISWIM program into lambda calculus using those equivalence rules, but it might be easier for a person — better human engineering — not to. It's useful to think about the relationships between the expressions and the subexpressions, but you don't need to take it all the way down to the lambda calculus; it's already equivalent to the lambda calculus. It's an interesting idea, right? Programming languages might be better than the lambda calculus for this stuff — something we take for granted now, but it apparently had to be said in 1966. Okay. Section ten: Eliminating Explicit Sequencing.
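The purist's move he's pushing back on can be sketched in Python (a hypothetical example of mine): in the lambda calculus, the only way to introduce a variable is to define a function and immediately apply it, which is exactly what let/where notation sugars over.

```python
# "let x = 41 in x + 1" written the lambda-calculus way:
# define a function of x, then immediately apply it to 41.
result_lambda = (lambda x: x + 1)(41)

# The same thing with an ordinary binding -- the readable form
# that ISWIM's let/where notation gives you directly. No jumps,
# no reassignment, so it is still "denotative" in Landin's sense.
x = 41
result_let = x + 1
```

The two forms are interconvertible by the equivalence rules, which is why Landin can say the readable form is still functional programming even though it isn't written in functional notation.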
This furthers the same idea. "There is a game sometimes played with ALGOL 60 programs — rewriting them so as to avoid using labels and go to statements. It is part of a more embracing game — reducing the extent to which the program conveys its information by explicit sequencing." So you could take a program — in Algol they used gotos all over the place, and assignment statements, and of course gotos and assignments depend on the order you run them in, so the sequence of those steps is really important — and you could play this game of untangling the spaghetti and getting rid of all the gotos and assignments. "The author does not argue the case against explicit sequencing here. Instead, he takes as a point of departure the observation that the user of any programming language is frequently presented with a choice between using explicit sequencing or some alternative feature of the language." You always have this choice: do I use a goto, which requires explicit sequencing, or do I use a subroutine or something else? "Furthermore, languages vary greatly in the alternatives they offer." Algol gives you subroutines, but other languages might have something else. "For example, our game is greatly facilitated by ALGOL 60's conditional statements and conditional expressions." Conditional expressions came from John McCarthy, starting in Lisp. Without conditional statements, you would have to jump over the then-branch if you wanted to get to the else, right? But with conditional statements — a block for the then and a block for the else — the compiler figures that out for you. So the question considered here is: what other features are there? What else is out there, like the conditional statement, that lets you avoid gotos?
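A quick Python sketch of that point (my example): a conditional expression replaces the jump-over-the-then pattern with a single expression that denotes a value.

```python
# Jump style, roughly what label/goto code had to spell out:
#   if not cond: goto ELSE
#   result = then_value
#   goto END
#   ELSE: result = else_value
#   END: ...
# Statement style still sequences assignments into branches:
def abs_statements(n):
    if n < 0:
        result = -n
    else:
        result = n
    return result

# Expression style: the whole conditional is one subexpression
# with a value -- no explicit sequencing needed.
def abs_expression(n):
    return -n if n < 0 else n
```

The conditional expression is the "alternative feature" in Landin's sense: it absorbs a piece of control flow that would otherwise have to be conveyed by explicit sequencing.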
"The question is considered because, not surprisingly, it turns out that an emphasis on describing things in terms of other things leads to the same kind of requirements as an emphasis against explicit sequencing." So the ability to say one thing in terms of another — to express things by what they're equivalent to — imposes the same kind of requirements as avoiding explicit sequencing does. Now, this is a general statement; he doesn't go specifically into what those features are. That is left to other papers, which we might read one day, but I want to call out the "Lambda the Ultimate" papers, especially "Lambda: The Ultimate GOTO," where the makers of the Scheme language show that you can do a ton of things with function calls that you would normally think you need goto for. They took this idea and showed that function calls can replace a lot of gotos. We're coming to the end of the paper here. This next part was in the conclusion; I read it at the top of the show. "Do the idiosyncrasies of languages reflect basic logical properties of the situations that are being catered for, or are they accidents of history and personal background that may be obscuring fruitful developments? This question is clearly important if we are trying to predict or influence language evolution. To answer it, we must think in terms, not of languages, but of families of languages. That is to say, we must systematize their design so that a new language is a point chosen from a well-mapped space, rather than a laboriously devised construction. To this end, the above paper has marshaled three techniques of language design: abstract syntax, axiomatization, and an underlying abstract machine." So, a general comment on the paper: obviously, we're not all programming in ISWIM.
So one way to look at it, you could say this was a failure — ISWIM was a failure — and you wouldn't be wrong. Even the idea of a family of languages might have been misguided; we don't really need a family of languages. But the thinking, the analysis, the deconstruction he did was very useful. It was very fruitful; a lot of ideas came out of it. This idea of truly separating the syntax from the semantics. This idea of "denotative" as a term for the ability to construct expressions out of subexpressions, and the importance of that for making code readable and more natural to write — all of that comes right out of this paper. And even the syntax he uses has lived on. If you look at something like Haskell — a current, major programming language — the let and where in Haskell come right out of this, and they'll tell you that's where it comes from. Haskell does depart from it: they don't use parentheses the same way to represent function calls, they just use a space. But this was very influential on it. I also like the idea of defining some kind of abstract machine. This potentially influenced Smalltalk, in the way that its base unit, the message send, is defined by the machine, and all other things are defined in terms of that. That comes right out of this paper. I'm sure other people were doing it, but he's the one who put it into this system of coming up with the different layers and what they should be. So I really like this paper, and I hope you do too. I like to read these papers to get an idea of the history. We don't talk enough about the history of programming and computer science. Most of us don't understand our history — myself included. When I went to university, I didn't learn this stuff, and it wasn't until after I got my master's that I really started going into it.
I didn't feel like I deserved my master's after I read all this. All of this is out there — how do I have a master's degree and have never even heard of these people? And these papers are all online, all accessible. I figure most people aren't going to read them, but they might listen to a podcast — so here I am, reading them. I also talk about my own opinions in other episodes: mostly functional programming, software design, things like that. If you want to subscribe, go to lispcast.com/podcast; you'll find links to subscribe on whatever platform you use. Please also tell your friends, share this episode, and check out my other episodes. All right. My name is Eric Normand. This has been my thought on functional programming. Thank you for listening, and I hope you reach out to me. See you next time. Bye.