The Synthesis of Algorithmic Systems
This is an episode of The Eric Normand Podcast.
Subscribe: RSS, Apple Podcasts, Google Play, Overcast
In this episode, I read excerpts from Alan Perlis's Turing Award Lecture called 'The Synthesis of Algorithmic Systems'.
Short biography - Turing Award Lecture
Transcript
"Programmers should never be satisfied with languages which permit them to program everything, but to program nothing of interest easily. Our progress, then, is measured by the balance we achieve between efficiency and generality. As the nature of our involvement with computation changes, and it does, the appropriate description of language changes, our emphasis shifts. Computer science is a restless infant, and its progress depends as much on shifts in point of view as on the orderly development of our current concepts."

Hello, my name is Eric Normand, and today I will be reading the ACM Turing Lecture, the first ACM Turing Lecture. This was the lecture given by Alan Perlis, the first Turing Laureate. I do these readings of interesting papers, and someone suggested that I go through the Turing Award winners and read their lectures, and I think that's actually a really great idea. He said that in general they did a pretty good job of selecting winners, and that it would be a good way of surveying the field.

I didn't know much about Alan Perlis, I mean, I knew of him. He was actually very early in programming. You should read his short biography; there's one published by the Turing Award committee, and you can just search for it online. I'll link to it in the notes. Just a few facts to put him in history: he was born in 1922, and in World War II he was working on numerical analysis. He did a lot of stuff in ballistics and very early computational things, and eventually, because he was so early in this field, he became the editor-in-chief of the ACM publications and also president for a time. But one of his biggest achievements, you could say contributions, was to be instrumental in the development of Algol, and Algol 60, so I'll read this paragraph from his biography.

"Arguably one of the most influential programming languages in history, Algol 60 had a very complex and controversial history in its early years. Nevertheless, Algol 60 brought about a qualitative leap forward in the understanding and acceptance of programming as a legitimate object of study, not just a practical necessity for getting a computer to work. In this regard, Perlis played an important role in turning Algol into a model for programming research, for example by working with his colleagues at Carnegie on several extensions of the language and by publicly arguing for the centrality of programming languages and algorithms as a defining concept in computer science."

Wow, so that's quite a statement there. Algol was developed because researchers wanted a publication language that was machine-independent. Instead of putting in your paper, let's say, some assembly language or some other language that was only available on a certain type of computer, you wanted to publish for a wider audience, so that people could run it on different kinds of machines, and also so that it would be clear to read in a publication, a printed paper. So Algol was very important for that, and it's kind of a predecessor to even stuff like Haskell, which was developed for similar reasons: functional programmers wanted a common syntax to do their research in, a research platform. Let's play with the syntax, let's add some extensions, let's see what we can do with this.

So he received the Turing Award in 1966, he gave a lecture, and he was the first person to be awarded this.
At the time, he was at the Carnegie Institute of Technology, which became Carnegie Mellon, in Pittsburgh. And it's a very hard paper to read, I want to admit that right up front. I've read it three or four times now, and there's just a lot: he uses a lot of metaphor, a lot of turns of phrase, and there's some humor in there that I don't quite get. This is from over 50 years ago, a long time, and the way people talked has changed, and what was considered good humor has changed. So it was very tough to actually figure out what he was talking about, and a lot of the terms in computer programming have changed. Nevertheless, I think it is a good paper, it has a good idea in it, and certainly coming from someone with his pedigree, his respect and honor, I think it's worth taking the time to read it.

It's not one of those papers where every sentence has been written and revised and rewritten to be perfectly clear technical language. Those are the kinds of papers I like the best, and I will reread those over and over and really try to understand what each sentence is saying. This one reads much more like rambling; there are a lot of ideas going around, and it hasn't quite baked enough to be concisely expressed. That just presents a different challenge to the reader, and we're going to take on that challenge today.

So he was instrumental in Algol, and Algol, at least in the circles I've run in, has been kind of the butt of jokes. So I had to overcome a big bias reading this, because the first half of the paper is all about how great Algol is, and I was like, I'll read this just because it's important to an important person, and I want to hear what he had to say, but it doesn't mean I have to like it, you know? But about halfway through, there's a big pivot, and it's like, whoa, this is pretty cool. We'll get to that pivot, I don't want to ruin the surprise. So I wound up liking the paper. He does say a lot of good things about Algol. It was super important, and obviously when he received this award, he knew that his role in the development of Algol was part of the reason he won it.

So let's get started. Again, just to pin this in time: this lecture was given at the 21st ACM National Conference in August 1966, and published in the Journal of the Association for Computing Machinery, Volume 14, Number 1, January 1967. So here we go. And again, I'm excerpting; it's about nine pages, and I'm skipping quite a lot, but I'm trying to pick out the things that have the most worth.

"On what does and will the fame of Turing rest? That he proved a theorem showing that for a general computing device, later dubbed a Turing machine, there existed functions which it could not compute? I doubt it. More likely it rests on the model he invented and employed: his formal mechanism."

Okay, so I think that's really interesting. He's setting the tone of this lecture by referencing the person the award is named after. It's kind of a neat historical thing: you're inaugurating this award, you've got to talk about the guy it's named after. So I think that's interesting. I don't think people do that anymore in their lectures. They just, oh yeah, it's just a Turing Award.
Okay, but he's saying that it's not about the halting problem, which is important, but that's not what Turing is going to be remembered for. It's about this machine, the Turing machine, which is a formal mechanism.

"Since the appearance of Turing's model, there have, of course, been others which have concerned and benefited us in computing. I think, however, that only one has had an effect as great as Turing's: the formal mechanism called Algol. Algol has mobilized our thoughts and has provided us with a basis for our arguments."

Okay, this is a little bit grandiose, I think, to say that Algol is perhaps as important as Turing's model. Certainly, looking back from the future of this lecture, yes, Algol probably had a huge effect, but who studies it anymore? Whereas we do talk about Turing a lot. So I don't know what to say: the fact that he was instrumental in Algol, but he's also singing its praises, is a little difficult to reconcile. Okay, so I'll just continue. Maybe he wasn't humble, I don't know him as a person, I don't know much about him. We'll just have to take that at face value.

"I have long puzzled over why Algol has been such a useful model in our field. Perhaps some of the reasons are: (a) its international sponsorship" (okay, multiple countries involved in it, that's good), "(b) the clarity of description in print of its syntax, (c) the natural way it combines important programmatic features of assembly and subroutine programming, (d) the fact that the language is naturally decomposable, so that one may suggest and define rather extensive modifications to parts of the language without destroying its impressive harmony of structure and notation. There is an appreciated substance to the phrase 'Algol-like,' which is often used in arguments about programming languages and computation. Algol appears to be a durable model, and even flourishes under surgery, be it explorative, plastic, or amputative."

That's really interesting, this is an interesting paragraph here, where he's talking about how there's some quality to it that persists even when you modify the thing. When you modify the syntax, when you add to it, when you subtract something, it still feels like Algol. This is the model of syntax that we still use today. JavaScript, C, Java, C#, all these languages derive their syntax through history; they can trace the lineage all the way back to Algol. If statements, blocks, functions with blocks, all that stuff comes directly or indirectly from Algol. There are other languages, like, let's say, the Lisp family, that are totally separate from Algol, and they just have a totally different feel to their syntax. So that's kind of interesting, that it did survive, right? People still call these languages Algol-like, and you can still trace it; even though these are very different languages, they still have that same quality of syntax.

Okay, and then (c) is the puzzling one, the fact... no, sorry, it's (e), the print is pretty bad here, okay: "(e) the fact that it is tantalizingly inappropriate for many tasks we wish to program." End of sentence. So that is one of the reasons why it has been a useful model: because it's tantalizingly inappropriate for many tasks. That's quite a bomb to drop, and then kind of not go into it. I think this is an attempt at humor, or he's just being playful and mysterious. I'm not sure.
Maybe he's trying to say that this is a topic he's bringing up for the rest of the paper; it's very unclear, okay.

"Of one thing I am sure: Algol does not owe its magic to its process of birth by committee." Okay, obviously a joke here, but it was developed by committee, and it's still good, which, you know, is lucky. "Thus, we should not be disappointed when eggs, similarly fertilized, hatch duller models. These latter will, while illuminating impressive improvements over Algol, bring on only a yawn from our collective imaginations. These may be improvements over Algol, but they are not successors as models."

Okay, so he's using a lot of metaphor here, which is kind of a distraction, but the main point is that people have been trying to improve on Algol, and while they're improving it, and it's clearly an improvement, we just yawn. They're boring. They're not going to replace Algol as a model. So in the rest of this paper he's trying to say what would make a good successor, what would replace Algol in a way that would actually be interesting.

"Naturally, we should and do put to good use the improvements they offer to rectify the weaknesses of Algol, and we should also ponder why they fail to stimulate our creative energies." You see, he's saying this is what we're going to be discussing in this lecture. "We should ask: why will computer science research, and even computer practice, work, but not leap forward, under their influence? I do not pretend to know the whole answer, but I am sure that an important part of their dullness comes from focusing attention on the wrong weaknesses of Algol."

He's talking about all these extensions to Algol, these modifications: new syntax for this part, instead of a case statement we have a switch statement, we change something like that. And he's saying these are all boring. You're just playing with the syntax, you're just changing things a little bit, and that's boring. What we need is to focus on the real weaknesses of Algol.

That was just the introduction. Now he's got a section called "The Synthesis of Language and Data Structures." Did I read the title of the lecture? It's called "The Synthesis of Algorithmic Systems," a very general title. What do I want to say? It's a very mysterious title. It doesn't really help you understand what the paper is about; it's a very general, abstract thing. Okay, "The Synthesis of Language and Data Structures."

He stumbles at the start of this sentence, so let me read the whole thing: "The design of a programming language should be performed only when the algorithms for this class of problems impose, or are likely to impose, after some cultivation, considerable traffic on computers, as well as considerable composition time by programmers using existing languages. The language then must reduce the cost of a set of transactions to pay its cost of design, maintenance, and improvement."

Whew, okay, I had to read this many times, but this is what it's saying: designing a programming language is expensive, and we should only undertake it if we're going to get a payoff from that design. That payoff comes from writing a lot of code in the new language, code that's better than the code in the old language because you've got a better language for that problem. I don't know if I said it any better, but we shouldn't be designing languages unless we're going to write a lot of code in them. That's what he's saying.
He talks about some of the causes of successor languages. Mostly it's about errors in the language, or an omission, and you want to correct that, okay.

"Where might one commence in synthesizing a successor model which will not only improve the commerce with machines, but will focus our attention on important problems within computation itself?" Okay, where do we start? How can we make something better than Algol, a successor that's not just for programming, but also for talking about programming, about computation itself?

"I believe the natural starting point must be the organization and classifying of data. It is, to say the least, difficult to create an algorithm without knowing the nature of its data. Since our successor is to be a general programming language, it should possess general data structures." Okay, that's interesting, that he thinks we should be focusing more on data. We'll get to that.

The typical approach for designing a language has been as follows: (a) a few primitive data structures, e.g. integers, reals, arrays, lists, strings, and files, are defined into the language; (b) on these structures, a sufficient set of operations is provided, so you've got arithmetic, logical, assignment, combinational; (c) any other data structure is considered to be non-primitive and must be represented in terms of primitive ones; and then (d), you can make a new set of operations for these non-primitive data structures, and those you make out of procedures.

"This process of extension cannot be faulted." This is how we still do it. JavaScript has some built-in things, and when you want to build some new things, you've got to make them out of those existing ones. Same with Java: you get arrays and strings and some stuff that's built in, and when you want to make a new thing, you make a new class and compose these existing things.

"This process of extension cannot be faulted. Ultimately it is always required," because your programming language is going to face situations where the internal set of data structures is not good for the particular problem you're facing. "However, if this process of extension is too extensively used, algorithms often fail to exhibit a clarity of structure which they really possess. And worse, they tend to execute more slowly than necessary. The former weakness arises because the language was defined the wrong way for the algorithm." So the lack of clarity is because the language was defined wrong for the algorithm you're writing. "While the latter," the executing too slowly, "exists because the language forces overorganization in the data and requires administration during execution that could have been done once prior to execution of the algorithm."

Okay, so there's work you have to do over and over that you shouldn't have to do. If the thing were built in, you could have done that work ahead of time. I kind of disagree with that... I think this is putting more work on the compiler. How should I put this? Let me give an example. You could make array access safer by having, let's say, a type system rich enough that it knows the length of the array being passed to a function. Then you don't have to do the bounds check at runtime; you can do it ahead of time, all the way at compile time. And so the idea is that the language can handle these things.
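Here's a rough illustration of that idea in a modern language (my example, not anything from the lecture): TypeScript's tuple types tell the compiler the exact length of an array, so an out-of-bounds access is rejected before the program ever runs.

```typescript
// A tuple type fixes the length of the array in the type system.
type Vec3 = [number, number, number];

function norm(v: Vec3): number {
  // In-bounds accesses type-check and need no runtime length check.
  return Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
}

const v: Vec3 = [3, 4, 12];
console.log(norm(v)); // 13

// Uncommenting the next line is a compile-time error:
// "Tuple type 'Vec3' of length '3' has no element at index '3'."
// const oops = v[3];
```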
If we just build these data structures into the language, a lot of the bookkeeping you do can be moved into the language level, into the compiler. And that is true, but then you get this problem that your compiler gets bigger and bigger just to support all these weird data structures. Anyway, I don't think I'm disagreeing with him. I'm just restating what he's saying, right? If you don't have it built in, you're going to have to pay the cost later.

"I think that all of us are aware that our languages have not had enough data types." Okay, we want more data types. "Certainly, in our successor model, we should not attempt to remedy this shortcoming by adding a few more." We can't just add more, because we'll always have need for new data types, so our language would just balloon. "Our experience with the definition of functions should have told us what to do: not to concentrate on a complete set of defined functions at the level of general use, but to provide within the language the structures and control from which the efficient definition and use of functions within programs should follow."

Okay, sorry, I've been doing copy editing for my book, and this is a sentence I would definitely rewrite; someone would say it's not clear. So what is he saying, though? With functions, we know you don't just build every possible function you might need into your language. You make it easy to make new functions, and you give the structures and the control flow and everything so that you can efficiently define these new functions.

"Consequently, we should focus our attention in our successor model on providing the means for defining data structures. But this is not of itself enough." The problem is you need to be able to give the context in which they occur, the evaluation rules.

This is one of the things I think Alan Kay was trying to address, this problem of data structures. He said one of his goals with Smalltalk was to get rid of data structures, because there's such a focus on following pointers and representing things and rotating trees and stuff, embedded right in your algorithm. You're doing some algorithm to solve a real problem, and at the same time you're manipulating this data structure, and he wanted to separate those two out so that the problem was much more clearly defined. I think today we see the use of that. It's easy to define a new data structure as a new class, give it some methods, which are all the operations on that data structure, and it can be defined completely separately from the algorithm which uses it. You just have to know maybe the time complexity, the algorithmic complexity, of that data structure: oh, don't call this within a for loop because it will become quadratic, or something like that.

He's bringing up this problem that we can't just say, "Oh, let's define new data structures." We need to know when they're being used; this analysis of what's going on has to be baked into the language, and you have to be able to extend the language to be able to talk about these different contexts.

This is a statement I'm pulling out just because I think it's really interesting: "We know there is no single way to represent arrays or list structures or strings or files or combinations of them." There are many ways to represent these things.
You have to make a choice: either you build them into the language (how are we going to represent strings?), or you give the choice to the programmer, and then you have to have some kind of syntax, some way of expressing that. That's really complicated. It's really hard to do. It's hard even to make the choice. I think one of the things we've gained over the years, because memory is cheap and processing is pretty cheap, is that we've just kind of compromised on something that works well enough, and we don't have to think about it, because everything's fast and big. We're talking about billions of times faster than what they had then. We don't have to focus so much on these little details of saving a byte here and there.

This is where it starts to get really interesting. Still difficult, but he starts talking about constants and variables, and this is a way of thinking I hadn't encountered before. "Truly, the flexibility of a language is measured by that which programmers may be permitted to vary, either in composition or in execution." What things can programmers change? That's the measure of flexibility. How much can you change? "Each new experience," every time we program, "focuses our attention on the need for more generality." We need a language that lets us do more.

He starts talking about time sharing: interaction with program. That's kind of an archaic use of "program," I think; we would say "software," because he's using "program" like it's a mass noun. So I'll use "software" too. Interaction with software will become increasingly flexible, and our successor must not make this difficult to achieve. "The vision we have of conversational programming takes in much more than rapid turnaround time and convenient debugging aids... our most interesting programs are never wrong and never final."

I think this is very prescient, and I'm amazed that in 1966 they knew this, or at least one person foresaw this. Our programs are never wrong. What he means is they have to be used: you write some code and there's a bug in it, but you still have to use it; the users are still going about their daily work. And they're never final: we're always planning some new feature, we're always putting something in, changing it somehow. We're going to have these programs, and we have to interact with them, even if it's not debugging live or live coding after it's deployed. We're going to be changing them all the time.

I remember there was some article in Wired magazine, it must have been in the early 2000s, maybe the late 90s. I can't remember who wrote it, might have been Kevin Kelly, and he said he was talking to someone, it might have been Tim O'Reilly. Anyway, I'm confusing it, I should have the reference, but I don't. He was saying he was talking to someone, like, "What do you do?" And that person was a programmer, and he said, "Oh, I work on the website for this system." And at that moment, the writer realized: wait a second, you are part of that website. The website would not work without you. This is not some software running in isolation that just does its job. There are people constantly changing things and making sure stuff is working and poking at it. This phrase from 1966, and let's say that article was written in the late 90s, that's 30 years later, was obviously true: the web is made out of software and people. People are in there making it happen.
Software is never wrong; it has to run, it does what it does.

"I contend that what is new is the requirement to make variable in our languages what we had previously taken as fixed. I do not refer to new data classes now, but to variables whose values are programs or parts of programs, syntax or parts of syntax, and regimes of control."

So he's talking about first-class programs. You should be able to assign a program to a variable. Now, that's one way to interpret it; he's using this term "variable" kind of loosely. He's also using it to mean things that can vary in the program, not just a location in memory with a name given in your programming language. Basically, he's saying we need to be able to vary parts of the program, parts of the syntax, and how things get controlled, the control flow.

I like how he comes to it kind of systematically. He's saying we need more flexibility, and flexibility is measured by what you can vary. So let's systematically look at all the things we're allowed to vary. We can vary the values in memory, we can vary all this stuff, but what we need to be able to vary, and Algol did not have this feature, is the program itself. This is the big wow; this is where it gets interesting.

So now he goes into analyzing it. "We generally take that which has the form of being variable and make it constant by a process of initialization." We would call this a constructor today: you have something, you give it all the parameters it needs, and it could have been different, because you can always give it different parameters, but now, boom, it is a fixed thing. "This process of initialization is a fundamental one, and our successor must have a methodical way of treating it."

He gives some examples, like things where we can't change how they are initialized. He talks about how we can define a variable, but we can't define a variable that we then say won't change. Whereas when you define a procedure, and this is all in Algol, it's not possible to change the procedure later. So there are these different classes: variables whose values can change, and procedures which can't, and you should be able to do both. He's saying we haven't really treated this systematically; we just arbitrarily picked that procedures can't change and variables always can. And you couldn't initialize the for statement: they had step numbers, and you couldn't change the value of that. There were little things in Algol that they hadn't allowed to vary. I don't think that's the case anymore for for statements, you can use variables in those places, but perhaps in Algol you couldn't.

"If we look at the operations of assignment of form, evaluation of form, and initialization as important functions to be rationally specified in a language, we might find Algol to be limited and even capricious in its available choices. We should expect this successor to be far less arbitrary and limited."

He also talks about being able to, for instance, select a procedure from an array, so that using an index you can take out a different procedure and call it. They couldn't do that in Algol. So you couldn't define a new kind of control flow mechanism based on an array of procedures computed at runtime. "It is easy to see that the point I am getting at is the necessity of attaching a uniform approach to initialization and the variation of form and value attached to identifiers."
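Just to make that concrete in a modern language (my sketch in TypeScript, not anything Perlis had): procedures as ordinary values, stored in an array and selected by an index computed at runtime, which is exactly the kind of thing Algol wouldn't let you do.

```typescript
// Procedures stored as ordinary values in an array.
const handlers: Array<(x: number) => number> = [
  (x) => x + 1,
  (x) => x * 2,
  (x) => -x,
];

// The procedure to run is selected by an index computed at runtime:
// a control-flow mechanism built out of data.
function dispatch(opcode: number, arg: number): number {
  return handlers[opcode](arg);
}

console.log(dispatch(1, 21)); // 42
```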
So he's just saying we need to be systematic, uniform.

Now he's going to talk about partial compilation. "The fact that a partial translation," he means compilation, "of the expression is all that can be done at the classical translate time should not deter us. It is time that we began to face the problems of partial translation in a systematic way. The natural pieces of text which can be variable are those identified by the syntactic units of the language."

So he's talking about partial compilation, where you can specify the things that are allowed to vary, like this piece of code, or let's even say this one variable: this parameter is variable, and every time we run the program it could have a different value. But there's all this stuff around it that could be compiled. So we should compile that, and separate it out from the thing that varies. We kind of do that these days: our compilers can leave a hole where they'll fetch from memory the value they need, it might be on the stack or something, and everything else is already compiled. But he's talking about even being able to put a whole piece of a program in there and somehow compile around that.

This turned out to be a much harder problem than is evident in this one sentence, because you don't know ahead of time what optimizations you will need to do. He gives an example: if you have some sum you're computing, and you know it's a sum of products, and you know a lot of the factors in those products but you don't know one of the multipliers, you could pre-compute a lot of the multiplications and the sum, and then factor out the unknown automatically. That's actually kind of a hard problem. Having all of this stuff pre-computed, in memory, waiting just to be multiplied by this last thing that will be entered later, that's a whole research branch now: partial evaluation. Tools can sometimes do it and sometimes they can't. I believe the general version is equivalent to the halting problem, so you can do some pattern matching and figure out some techniques, but it's not a thing we can do in general. It's cool that he's bringing this up. This guy had the clout to just open up a new field of study like that. That's pretty cool. I don't know if he was the first to say it or what, but looking back, it's cool. I'll sketch the idea in code below.

It's easy to say, execute the original text interpretively. So he's talking about how you would implement something like this: let's interpret it, just don't compile it, interpret it. "But it is through intermediate solutions lying between translation and interpretation that the satisfactory balance of costs is to be found."
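Here's a minimal sketch of that staging idea using closures, in TypeScript (my construction, not Perlis's): everything that doesn't depend on the late-arriving value is computed once, and the residual program is a function waiting only for that value.

```typescript
// Sum of products a[i] * b[i] * x, where a and b are known early
// and x is only known later, at "run time".
function stageSum(a: number[], b: number[]): (x: number) => number {
  // Done once, at "partial compilation" time.
  const precomputed = a.reduce((acc, ai, i) => acc + ai * b[i], 0);
  // The residual program: only the work that depends on x remains.
  return (x) => precomputed * x;
}

const f = stageSum([1, 2, 3], [4, 5, 6]); // precomputes 4 + 10 + 18 = 32
console.log(f(2));  // 64
console.log(f(10)); // 320
```

A real partial evaluator would do this kind of factoring automatically over arbitrary program text, which is where it gets hard.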
Here's the big reveal that really got me interested in the paper. I read through it and then I was like, we have to go back and read it again, because I did not see this coming. It's the next section, "Data Structure and Syntax."

"Even though list structures and recursive control will not play a central role in our successor language, it will owe a great deal to Lisp." This was a sentence that was like, whoa. It's funny that he brings up list structures and recursive control, because I think that still to this day, that is the stereotype of Lisp: oh, that's just a language where you use recursion a lot and you do a lot of walking down linked lists. But he's saying that's not what's cool about Lisp. "This language induces humorous arguments among programmers, often being damned and praised for the same feature."

I'll just leave that as is, no comment. "Its description consciously reveals the proper components of language definition with more clarity than any language I know of. The description of Lisp includes not only its syntax, but the representation of its syntax as a data structure of the language."

He's talking about code as data, and not only that, but that the definition of how the language executes was written in itself; it can process that code that is data. "This completeness of description is possible with other languages," so you could write an Algol interpreter in Algol, "but it is not generally thought of as part of their defining description." It might just be a fun weekend project or something, but it's not how we would define the language, whereas the first paper on Lisp did just that. "An examination of Algol shows that its data structures are not appropriate for representing Algol texts." You would have to start defining new data structures just to represent the abstract syntax tree. "I regard it as critical that our successor language achieve the balance of possessing the data structures appropriate to representing syntax and environment, so that the evaluation process can be clearly stated in the language."

"Why is it so important to give such a description? It is the key to the systematic construction of programming systems capable of conversational computing." I don't know what to say about this; he doesn't justify this statement very much. He's saying it's not about bootstrapping, but that somehow the ability to interactively program, this conversational computing, requires that you be able to manipulate the language, manipulate the execution of the language, represent the syntax, the ideas of the language, as first-class data values that you can process.

I don't think he's talking about just having a REPL; that's not what he's saying. It has something to do with this partial compilation he brought up before. Instead of having a huge mechanism, a whole section of language syntax, that lets you describe in detail the context in which a data structure is going to be used, so that the compiler can organize it in memory in an efficient way and do all this pre-computation and be intelligent about how it's going to allocate, you could just pass that on to the programmer and say: "You know more about the context in which this data structure is going to be used, what control flow you're going to need, basically the context we would need to compile this efficiently. You know all of that, because you're working on the problem and you're learning about it, and it's a specific problem, whereas this is a general-purpose language. So here's the syntax tree: you compile it, programmer. You know what you're working on."

I think that's what he's calling conversational: we're constantly evolving our program, our language, our understanding, interacting with this software. We need to be able to embed that context somehow, and it's just too much to come up with a syntax for all of it, so give the programmer control. That's how I interpret this.

All right, so this data structure is the internal, or evaluation-directed, syntax of the language. We would call that an AST these days, and he talks about what needs to go in the AST. "The choice of evaluation rules depends in a critical way on the binding time of the variables of the language."
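To make the code-as-data idea concrete, here's a toy sketch of my own in TypeScript: the syntax represented as a plain data structure of the language, with the evaluation rules written in the same language, and an environment supplying the late-bound (variable) parts.

```typescript
// Syntax represented as a data structure of the language itself.
type Expr =
  | { kind: "const"; value: number }
  | { kind: "var"; name: string }
  | { kind: "add"; left: Expr; right: Expr };

// The evaluation rules, stated in the language, against an environment
// that supplies the late-bound (variable) parts.
function evaluate(e: Expr, env: Map<string, number>): number {
  switch (e.kind) {
    case "const": return e.value;
    case "var":   return env.get(e.name)!;
    case "add":   return evaluate(e.left, env) + evaluate(e.right, env);
  }
}

const prog: Expr = {
  kind: "add",
  left: { kind: "const", value: 40 },
  right: { kind: "var", name: "x" },
};
console.log(evaluate(prog, new Map([["x", 2]]))); // 42
```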
So you're going to need the AST to be able to represent what is variable and what is constant. You know, this is a complex arithmetic expression, whereas this is just a reference to a variable, and this is a constant. The internal data structure reflects the variability of the text being processed (yeah, I just said that). So I think that's a cool idea: Algol was a fixed language, and he's saying that to get a quantum leap, we're going to need to make the language able to represent itself, to represent more than a fixed language can.

We should expect our successor to have the kinds of control that Algol has, and more. Now, this is a cool paragraph, I wanted to highlight it. "Parallel operation is one kind of control about which much study is being done. Another one, just beginning to appear in languages, is the distributed control which I will call monitoring. Process A continuously monitors process B, so that when B attains a certain state, A intervenes to control the future activity of the process. The control within A could be written: when P then S. P is a predicate, which is always, within some defining scope, under test. Whenever P is true, the computation under surveillance is interrupted and S is executed. We wish to mechanize this construct by testing P whenever an action has been performed which could possibly make P true, but not otherwise."

The way I interpret this is that he's basically talking about what we now call reactive programming: when P changes, some S happens, instead of, say, the other way around, where you somehow tell it to do S because you just changed P. That's cool. I just wanted to bring up that back in 1966 he was describing reactive programming. I'll sketch this in code at the end of this section.

So this is from the section called "Variation of the Syntax," and he's talking about varying the syntax here. "A natural technique to employ in the use of definitions is to start with a language X, consider the definitions as enlarging the syntax to that of a language X′, and give the evaluation rules as a reduction process which reduces any text in X′ to an equivalent one in X. It should be remarked that the variation of the syntax requires a representation of the syntax, preferably as a data structure of X itself."

So he's defining a thing that is actually used a lot these days, in two different contexts. One is macros: you have some language like Lisp, which has some fixed things that come with the language, and then you define some new syntax rule, so now you have X′, and then you define a function which converts X′ expressions into X expressions, and you've made a tiny language extension for yourself. But we also do this now, all over the place, in what are called transpiled languages: you have something like TypeScript, or all the ES6-and-beyond extensions that compile down to regular vanilla JavaScript, ES5, I guess that's what they call it. You define a superset of the language, and you define how to translate that superset down into the actual original set. So we do this all the time nowadays, and it seems like he was just remarking on it: we do this too, it's just a natural technique. So he was saying that this is a way we can extend syntax.
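Here's the monitoring sketch I promised, my own TypeScript rendering of "when P then S" (nothing like this appears in the lecture): the predicate is re-tested only when an action occurs that could have changed it.

```typescript
// "when P then S": run S whenever an update makes the predicate P true.
function makeMonitored<T>(initial: T, p: (v: T) => boolean, s: (v: T) => void) {
  let value = initial;
  return {
    get: () => value,
    set: (next: T) => {
      value = next;
      // Test P only on actions that could possibly have made it true.
      if (p(value)) s(value);
    },
  };
}

const temp = makeMonitored(20, (t) => t > 100, (t) =>
  console.log(`overheating at ${t} degrees`)
);
temp.set(50);  // nothing happens
temp.set(120); // "overheating at 120 degrees"
```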
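And here's a toy version, also mine, of the X′-to-X reduction he describes: a core language X, an extended language X′ with one new form, and the new form's meaning given as a rewrite into the core. It's the same pattern as a Lisp macro expander or a TypeScript-to-JavaScript transpiler.

```typescript
// Core language X: constants and addition.
type X =
  | { kind: "const"; value: number }
  | { kind: "add"; left: X; right: X };

// Extended language X': everything in X, plus a new "double" form.
type XPrime =
  | { kind: "const"; value: number }
  | { kind: "add"; left: XPrime; right: XPrime }
  | { kind: "double"; body: XPrime };

// The evaluation rule for the extension is a reduction from X' down to X.
function lower(e: XPrime): X {
  switch (e.kind) {
    case "const":  return e;
    case "add":    return { kind: "add", left: lower(e.left), right: lower(e.right) };
    case "double": {
      const body = lower(e.body);
      return { kind: "add", left: body, right: body }; // double(e) => e + e
    }
  }
}

const program: XPrime = { kind: "double", body: { kind: "const", value: 21 } };
console.log(JSON.stringify(lower(program)));
// {"kind":"add","left":{"kind":"const","value":21},"right":{"kind":"const","value":21}}
```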
All right, so in the conclusion: "Programmers should never be satisfied with languages which permit them to program everything, but to program nothing of interest easily. Our progress, then, is measured by the balance we achieve between efficiency and generality. As the nature of our involvement with computation changes, and it does, the appropriate description of language changes, our emphasis shifts. Computer science is a restless infant, and its progress depends as much on shifts in point of view as on the orderly development of our current concepts."

I read this in the introduction of this podcast. He's saying that universality is not enough. You can't just have a general programming language that doesn't make anything easy, or that doesn't make the stuff you need to do easy. We need to design these things, and things change. We're at the beginning, this is a restless infant, and we need to continuously change our point of view, which is easier said than done. And I have to say that this paper did change my point of view.

I think there are a couple of things to bring up that relate to what he's talking about. I think partial compilation, let's call it that, has turned out to not be such a limit. What do I mean by that? If you have a function in an eagerly evaluated language, it's got arguments that vary: every time you run it, you can pass different arguments. But for the code inside it, we don't care so much anymore how well the compiler can figure out what can be pre-computed and what can't. Like, if you have some argument x, and if x were some big expression, we could somehow optimize around it in certain ways... we don't care so much about that. We just expect that the compiler is going to compile the whole code in a straightforward way and run the whole code every time. And that's not to say that the JIT doesn't figure stuff out and optimize, but the JIT isn't doing that kind of static analysis, right? It's not algebraically manipulating the syntax and saying, oh, well, if it varies in this way, we can make this optimization. It's not doing that.

What is really more interesting, I think, is that having first-class functions gives you like 90 percent of what he's talking about here. You don't need to vary the syntax that much. Clojure is kind of a testament to that: if you have a big enough memory and a fast enough computer, you're not doing all these compiler optimizations yourself as a programmer, you're just passing in the first-class function. And it would be nice in a language like JavaScript if you could define some macros occasionally, but you can get by just by having a quick, easy syntax for first-class functions, because a first-class function can defer a computation. So you don't need a syntax for every construct; you can just make a first-class function, you can make a higher-order function.

And this is something that Smalltalk knew as well. You don't need an if statement: you can just make a method where, if the value you're calling the method on is false, it calls the else block, and otherwise it calls the then block, right? And the block was so easy to define: you just wrote some brackets and a couple of pipes and it made a block, and you could define arguments in there if you needed them. If the syntax is easy enough to make these deferred computations, also known as closures, then you don't need modification of the syntax so much.
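Here's a sketch of that Smalltalk trick in TypeScript (mine, and simplified; real Smalltalk does this with Boolean objects and ifTrue:ifFalse: messages): the branches are passed as closures, so conditional control flow is just an ordinary method call.

```typescript
// A Smalltalk-style Boolean: true and false are objects that know
// which of two deferred blocks to run. No built-in "if" needed.
interface Bool {
  ifTrueIfFalse<T>(thenBlock: () => T, elseBlock: () => T): T;
}

const True: Bool = { ifTrueIfFalse: (thenBlock, _elseBlock) => thenBlock() };
const False: Bool = { ifTrueIfFalse: (_thenBlock, elseBlock) => elseBlock() };

// Bridge from the host language's comparisons to our Bool objects.
function greaterThan(a: number, b: number): Bool {
  return a > b ? True : False;
}

const label = greaterThan(7, 5).ifTrueIfFalse(
  () => "big",
  () => "small"
);
console.log(label); // "big"
```

Neither branch runs until the boolean object decides which deferred block to call, which is all an if statement really is.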
So I just wanted to bring that up. In a talk, Gerald Sussman talked about how once you have the lambda calculus, it just gives you so much freedom of expression that you kind of stop caring about programming language design, in a certain way. You have all the tools you need. And I found that to be true. I've done some work in PHP in recent years, because I run a website on WordPress, and occasionally you have to go in and code something. The language has warts like any language, but you have first-class functions, and you can just kind of keep working around problems, because first-class functions are such a nice medium to work in. Algol did not have first-class functions, so maybe Alan Perlis, if he were alive today, would say: yeah, all we needed was first-class functions in Algol and we would have been fine. I'm putting words in his mouth, but that's the kind of thing I think about when I read this paper: did it have to be so complicated? Of course I like macros, I think it's cool that you can modify the syntax dynamically like that, but people are doing it in JavaScript: they have first-class functions, and now JavaScript has a syntax that makes it really concise to define a new function, the arrow syntax, and that kind of solves this problem, maybe halfway, more than halfway.

Well, that's just my reflection on this. I do think that the idea of a language defined in itself does lend it clarity. Bootstrapping is cool, you know, it's a neat thing, but a bootstrapped language has in its definition a kind of proof of its own generality and universality: hey look, we just defined a language in itself, so it is Turing complete; it can define languages in itself, so it can run any language, right? That's the second part of Turing universality: not only is it Turing complete, meaning it can do anything any other Turing machine can do, which is kind of easily proved if you have conditionals and gotos, right, but it also has this other thing, which is that it can run any other Turing machine. You can write an interpreter for another language and run programs in that language inside of the host language. And Lisp has that built in, because it was defined to define a language, right?

There's something there that I think transcends the simply pragmatic. Like, sure, that's neat that Lisp can do that, but is it so practical? Because every day, are you really defining new things? And maybe you would be, if you could. That's sort of what Racket is all about: for any problem, let's define a new language first, one that will make that problem easy to specify and solve.

Okay, those are my thoughts on this paper. I hope to go through, in order, all of the Turing Award lectures. I've covered a couple already, I think, but I think it's a good algorithm for me, to make sure I always have content to work on, and also to get a sweep of history: what are the important ideas in computer science? So if you'd like to follow along, please subscribe, and please tell your friends. I'd love to hear your questions and comments; just reach out to me
on Twitter or by email at eric@lispcast.com. And I think that's it for today. My name is Eric Normand, and these have been my thoughts on this Turing lecture. As always, rock on.