The Early History of Smalltalk

We read one of the great articles by Alan Kay, inventor of Smalltalk, called The Early History of Smalltalk.

[00:00:00] Eric Normand:

Smalltalk's design---and existence---is due to the insight that everything we can describe can be represented by the recursive composition of a single kind of behavioral building block that hides its combination of state and process inside itself and can be dealt with only through the exchange of messages. Philosophically Smalltalk's objects have much in common with the monads of Leibniz and the notions of 20th century physics and biology. Its way of making objects is quite Platonic in that some of them act as idealizations of concepts---Ideas---from which manifestations can be created. That the Ideas are themselves manifestations (of the Idea-Idea) and that the Idea-Idea is a-kind-of Manifestation-Idea---which is a-kind-of itself, so that the system is completely [00:01:00] self-describing---would have been appreciated by Plato as an extremely practical joke.

Hello, my name is Eric Normand. You're listening to Thoughts on Functional Programming. And today we are reading the paper or article, The Early History of Smalltalk by Alan Kay. Alan Kay is a computer scientist. He has won the Turing award and in this paper from 1993, he is trying to explain, through a narrative, how Smalltalk came to be.

What were the influences? Who were the people involved? Who deserves credit for what, and what kinds [00:02:00] of technical inventions had to happen for Smalltalk to come about? That's a big task. It's a big article. The way I've printed it, it's about 54 pages. I'm not going to read the whole thing, obviously, but it talks about a lot of stuff, and so I feel like it needs a focusing question. It doesn't answer just one question. It's sort of like, what are all the influences to credit? It's got everything. What happened? How did Smalltalk come to be? But what I want to ask is: how has one person (Alan Kay) had such influence on computing?

You know, Alan Kay is credited with inventing object-oriented programming, which [00:03:00] is arguably or indisputably the dominant paradigm that is used in industry today and taught in schools. He's also credited with inventing, or at least being very central to the invention of, what we would call the windows, icons, mouse and pointer. No, sorry: windows, icons, menus, pointer (WIMP). The WIMP interface that we all use today. The sort of direct manipulation, moving a mouse around, clicking on things, WYSIWYG editing, all of that is from the lab that he had at Xerox PARC. And [00:04:00] so he created trillions of dollars of value in our industry.

Certainly, he didn't make that much money. But how is that possible? What was going on at the time and what kind of person does it take to do that, to have that kind of influence? So we're going to go through this article. I'm going to read sections and talk about them.

Certainly, this is not a dense technical article. It does have some technical parts, but a lot of it is narrative and a lot of it is expounding. He's expounding on the ideas that he had and why they were important. It's really worth a read. It's definitely that kind of article. [00:05:00] But I'm only going to focus on this one question, because otherwise it would be too much.

You should read this article, by the way. It's really good, but I can't read the whole thing on this podcast. This paragraph that I read at the beginning really shows a lot of what's going on in Alan Kay's life. He brings in Leibniz and Plato. So Leibniz, mathematician, Plato, philosopher, in the same paragraph. He's drawing from these different threads. He's talking about 20th century physics and biology. He's describing in a succinct way how object-oriented programming works: messages, and a combination of state and process, hidden. This idea of self-reference. It really sums up a [00:06:00] lot of what the paper is about.

So I'm going to continue with the next paragraph from there.

In computer terms, Smalltalk is a recursion on the notion of computer itself. Instead of dividing "computer stuff" into things each less strong than the whole---like data structures, procedures, and functions, which are the usual paraphernalia of programming languages---each Smalltalk object is a recursion on the entire possibilities of the computer. Thus its semantics are a bit like having thousands and thousands of computers all hooked together by a very fast network. Questions of concrete representation can thus be postponed almost indefinitely because we are mainly concerned that the computers behave appropriately, and are interested in particular strategies only if [00:07:00] the results are off or come back too slowly.

I do want to read the next paragraph, but let me break this down a little bit. We have to remember that these ideas came up before object-oriented programming existed, right? By '93, the boom in object-oriented programming and programming languages was already happening. People were starting to use it and understand it, and it was proliferating through the industry. He doesn't talk about it in here, but I believe Alan Kay would say those languages captured the idea in a lesser way than Smalltalk did. He says things like "I invented object-oriented programming, and I did not mean Java or C++."

[00:08:00] So in 1993, people had already started using C++. Java was just around the corner. There were some other languages doing object-oriented programming. And he's saying they kind of missed something. He doesn't talk about that much here, but I'm bringing that in. The idea is that we simulate virtual computers: each object is a virtual computer, so it has all the power, the Turing completeness, of a computer. But now you have thousands of them, and each one is doing a small task. That's the idea behind object-oriented programming. Today, we might explain it differently. We might say that it's data and [00:09:00] behavior fused together, hiding the implementation.

But we don't often think of it as computers communicating over a fast network. That's not our metaphor. And I think we've lost something because we don't have that metaphor now. I really like this idea now. So the people who do this pretty well are the Erlang folks.

Erlang has this idea of a process, which is totally a Turing-complete process. It has a state and it passes messages to other processes. They do think much more in terms of "these are processes communicating with each other, and even the process could migrate to another computer." You do have this [00:10:00] idea of the network being there.
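The Erlang process model described above can be sketched in Python (not Erlang), with a thread and a queue standing in for a process and its mailbox. The `Counter` actor and its message names are invented for illustration:

```python
# A minimal actor sketch: private state, a mailbox, and a message loop.
# Messages are the ONLY way to reach the state -- the idea from the
# transcript of processes communicating over a (here simulated) network.
import queue
import threading

class Counter:
    """A tiny actor: state is hidden; interaction is by message only."""
    def __init__(self):
        self._count = 0                 # state hidden inside the process
        self._mailbox = queue.Queue()   # messages are the only way in
        threading.Thread(target=self._loop, daemon=True).start()

    def send(self, msg, reply_to=None):
        self._mailbox.put((msg, reply_to))

    def _loop(self):
        while True:
            msg, reply_to = self._mailbox.get()
            if msg == "increment":
                self._count += 1
            elif msg == "get" and reply_to is not None:
                reply_to.put(self._count)   # the reply is itself a message

counter = Counter()
counter.send("increment")
counter.send("increment")
reply = queue.Queue()
counter.send("get", reply_to=reply)
result = reply.get()
print(result)  # → 2
```

Because the mailbox is processed in order by a single loop, the "get" reply always reflects the increments sent before it, with no locks around the state.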

He says questions of concrete representation can thus be postponed almost indefinitely. What he's saying is, if you've got the right interface boundaries, you can postpone making the hard decisions of implementation. Let's say I'm a computer. I'm talking to this other computer. (They're really what we would call objects.) I don't have to care how it's represented in that other object. I can even just stub it out or make a very naive implementation and postpone it almost indefinitely. The only time that I might be interested in it is if the results are wrong. "That's not the right answer. We need a better algorithm implemented in that other object," or it comes back too [00:11:00] slowly. We need to optimize the other algorithm. But notice there's this idea of being able to encapsulate those changes in an object and not be so concerned with concrete representation.
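That idea of postponing representation can be sketched in a few lines of Python (names here are invented for illustration): the caller only sends one message, so a naive object can be swapped for a faster one later without touching the caller.

```python
# Two interchangeable objects behind one message, is_prime. The caller
# never sees how either one represents or computes the answer.
class NaivePrimes:
    def is_prime(self, n):
        # Trial division by every candidate -- a stub you might write
        # first and "postpone almost indefinitely".
        return n > 1 and all(n % d for d in range(2, n))

class FasterPrimes:
    def is_prime(self, n):
        # Only swapped in once the naive version "comes back too slowly".
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

def count_primes(prime_checker, limit):
    # The caller sends the is_prime message and nothing else.
    return sum(1 for n in range(limit) if prime_checker.is_prime(n))

print(count_primes(NaivePrimes(), 100))   # → 25
print(count_primes(FasterPrimes(), 100))  # → 25
```

Same interface, same answers; only the hidden strategy changed.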

Now, this was written in '93, but a lot of his work on Smalltalk was done in the sixties and seventies (and we'll come back to this). A lot of the focus in programming in general at that time was on data structures and algorithms. How do you represent a tree in memory with pointers, and how do you walk that tree with an efficient algorithm?

These are questions that he wanted to not have to answer, at least not immediately. He wanted to get the work done, [00:12:00] and just push that off into the indefinite future.

I'm gonna read the next paragraph.

Though it has noble ancestors indeed, [giving credit to other people here] Smalltalk's contribution is a new design paradigm, which I called object-oriented for attacking large problems of the professional programmer and making small ones possible for the novice user. Object-oriented design is a successful attempt to qualitatively improve the efficiency of modeling the evermore complex dynamic systems and user relationships made possible by the silicon explosion.

What we would say today is something like: object-oriented programming scales better than imperative, procedural programming. I think that's what he's saying here. He's saying it's successful. [00:13:00] That it qualitatively improves the efficiency of modeling the evermore complex dynamics.

So we're making bigger systems, and object-oriented programming gives you some better way of modeling. He's not saying it's the perfect answer. He definitely doesn't say that. In fact, by the end, he's saying he wishes someone would do better, and that he's surprised that people haven't done so in the time since he invented it.

Though OOP came from many motivations, two were central. The large scale one was to find a better module scheme for complex systems involving hiding of details, and the small scale one was to find a more flexible version of assignment, and then try to eliminate it altogether. As with most new ideas, it [00:14:00] originally happened in isolated fits and starts.

He's talking about two ideas here. Better module scheme to hide details. This was the big one. He thought that this was important. You needed to be able to hide the details. This is very important for scaling because you can change the details without changing the whole system.

And that module scheme that he's talking about is the object with message passing, where the methods form the module boundary. And then the small-scale one was finding a more flexible version of assignment and then trying to eliminate it altogether. This is one of those mysteries where you have to really dig [00:15:00] into the depths of Alan Kay's thinking. This is the exact reason why I like this format for the podcast, because this is not something that you would be able to understand if you weren't intimately familiar with this stuff.

I've read this paper many times. I've watched a lot of his talks. I've thought a lot about it, had many discussions about what's going on here, and I feel like I've got a pretty good idea of what this means. He talks about this later, so I'm not going to go into too much detail, but with this more flexible version of assignment, and eliminating it altogether, he's trying to get rid of state-based programming. He's trying to get rid of using variables to store [00:16:00] state, or using places in a data structure to store state. He feels like there's something brittle and coupling about a piece of code that assigns its state to some variable. It should have some more flexible form, and ideally, it should disappear. It should have no form at all. Now obviously there is state in object-oriented programming. There's data in it, but it's hidden behind this method boundary. And one thing you could do, which a lot of people do, is to make getters and setters on that state.

But he thinks that's just wrapping. You're just taking the [00:17:00] assignment statement and giving it a different form. Instead of some assignment operator, now it's just a method that does the same thing, right? He wants to eliminate it altogether and have something else, something more goal-based, higher level. Not just store this for me, but higher-level thinking, depending on what tasks you're doing.

If it's a bank account, it should be something like "transact this money," and the account manages its own state. Well, we'll get more into that.
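A minimal Python sketch of that bank-account idea (the class and method names are invented): no setter on the balance, just a higher-level `transact` message, with the account managing its own state.

```python
# Instead of get_balance/set_balance -- assignment in disguise -- the
# account is sent a goal-level message and enforces its own rules.
class Account:
    def __init__(self, balance=0):
        self._balance = balance   # hidden; no setter exposed

    def transact(self, amount):
        """Apply a deposit (positive) or withdrawal (negative)."""
        if self._balance + amount < 0:
            raise ValueError("insufficient funds")
        self._balance += amount

    def statement(self):
        return f"balance: {self._balance}"

acct = Account(100)
acct.transact(50)
acct.transact(-30)
print(acct.statement())  # → balance: 120
```

The caller states a goal ("transact this money") and the object decides how its state changes, which is the flexibility Kay wanted over raw assignment.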

Now he's going into the philosophy, the history of ideas. Very important when reading Kay: you know you're going to get some philosophy. You have to be prepared to go on a journey through the history of ideas. He's kind of [00:18:00] placing himself in this epic in a somewhat modest way. But still, he's definitely putting himself in history here.

New ideas go through stages of acceptance, both from within and without. From within, the sequence moves from "barely seeing" a pattern several times, then noting it but not perceiving its "cosmic" significance, then using it operationally in several areas, then comes a "grand rotation" in which the pattern becomes the center of a new way of thinking, and finally, it turns into the same kind of inflexible religion that it originally broke away from. From without, as Schopenhauer noted, the new idea is first denounced as the work of the insane, in a few years, it is considered obvious and mundane, and finally, the original denouncers [00:19:00] will claim to have invented it.

He's talking about how the pattern of object-oriented programming was barely seen at first, then he noted it but didn't really get why it was so important, then he used those ideas and started building stuff, and then something happens. The pattern becomes the center of a new way of thinking, and then it turns into religion, which is kind of what we have today in object-oriented programming (and other paradigms too; I'm not picking on object-oriented programming).

I'm just saying that that's what happens with ideas, with all ideas. [00:20:00] He's talking about one influence. I'm not going to go through every single influence, but here's one very early one. He was working in the Air Force as a programmer.

There was no standard operating system or file formats back then. So some designer decided to finesse the problem by taking each file and dividing it into three parts. The third part was all of the actual data records of arbitrary size and format. The second part contained the B220 procedures that knew how to get at records and fields to copy and update the third part. And the first part was an array of relative pointers into entry points of the procedures in the second part (the initial pointers were in a standard order representing standard meanings). Needless to say, this was a great idea and was used in many subsequent systems until the enforced use of [00:21:00] COBOL drove it out of existence.

There we get a little bit of cynicism. Sometimes I get depressed when I read these papers about Alan Kay's stuff, or watch his talks, because he talks about the future that could have been, and it's sad. But here he's talking about how they had to read these tapes of data. All the data would be put in the larger third part. Then the second part had procedures for accessing the pieces of data inside, like maybe the record structure or whatever. And then the first part had pointers into the procedures. It's this next step of indirection. And so you could read in this small table that had [00:22:00] the standard operations, like reading and writing, and maybe there was a search feature, and all the standard stuff was up at the front in a specific order.

Someone would ask you, "Can you retrieve the record for this ensign? Here's his ID number. Please retrieve his record." So you would know that, say, the third pointer was "retrieve record by ID," and that pointer would point to a place in the second part that had the full procedure. You could jump to that, run the procedure with the ID as input, and it would go fetch; [00:23:00] that procedure would run and read the rest of the tape. The reason he brings it up is that it was a cool idea. It's like a v-table, and this is how objects are implemented nowadays: you have a table of pointers to methods, and that's basically your class, and methods get looked up there. And then the third part is like the object itself, which contains all the state. It's pretty cool that this was done so early, and on a tape system.
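The tape layout he describes can be mimicked in a few lines of Python. The record fields and procedure names here are invented for illustration, not taken from the paper:

```python
# "Part 3": the raw data records, arbitrary shape.
records = [
    {"id": 7, "name": "Ensign Pulver"},
    {"id": 9, "name": "Ensign Parker"},
]

# "Part 2": the procedures that know the record layout.
def count_records(data):
    return len(data)

def retrieve_by_id(data, record_id):
    for rec in data:
        if rec["id"] == record_id:
            return rec
    return None

# "Part 1": a table of pointers to procedures, in a standard slot
# order -- much like a modern v-table.
dispatch = [count_records, retrieve_by_id]

# A reader that knows only the standard slot order can use any "tape":
found = dispatch[1](records, 9)
print(found["name"])  # → Ensign Parker
```

The caller never touches the records directly; it only knows which slot does what, which is exactly the indirection that lets different tapes carry different layouts.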

Another influence, he's talking about Sketchpad.

The three big ideas that were easiest to grapple with were: it was the invention of modern interactive computer graphics [Sketchpad was]; things were described by making a "master drawing" that could produce "instance drawings" [So now he sees the idea of class and instance]; control and dynamics were supplied by constraints, [00:24:00] also in graphical form, that could be applied to the masters to shape and interrelate parts.

So this idea of constraints, he talks about it later, so we'll bring that up again. Because, you know, in object-oriented programming, we don't think of programming with constraints.

By luck, his professor handed him the Sketchpad paper, the thesis that defined Sketchpad. He read it. This is when he's in grad school.

Then his professor says, here's a code listing, figure out what it does. And it happened to be Simula. He's reading the code and trying to figure it out. [00:25:00]

What Simula was allocating were structures very much like the instances of Sketchpad. . . What sketchpad called masters and instances, Simula called activities and processes. Moreover, Simula was a procedural language for controlling Sketchpad-like objects, thus having considerably more flexibility than constraints (though at some cost in elegance).

So notice it's using a procedural language instead of constraints. Constraints are nice, especially for purely two-dimensional representations of drawings.

But he's saying, wait, this has a procedural language, which brings back in the Turing completeness. So now you can have a Turing complete language for controlling the objects. This is where he has this big epiphany that all those things that I just talked [00:26:00] about came together.

My math major had centered on abstract algebras with their few operations generally applying to many structures. My biology major had focused on both cell metabolism and larger scale morphogenesis with its notions of simple mechanisms controlling complex processes and one kind of building block able to differentiate into all needed building blocks.

This deserves some deconstruction. So he had a math major. There was no computer science major back then. There weren't enough computers. It wasn't a big enough field yet. But it centered on abstract algebra with their few operations generally applying to many structures. So notice, we might call this today polymorphism where you have different data types, but they can use the same [00:27:00] operations.

You could say that numbers use addition. That operation of addition applies to numbers, but it also applies to matrices, right? Then you get into abstract algebra. You have groups and fields and things like that.

This idea of things acting in similar ways, similar enough that the same operations can apply to them. They are technically different operations, but conceptually they share enough in common that you use the same name. Because addition on a matrix is not the same as addition with numbers. We know we would implement them in totally different ways, but there's something about it that makes us say, yeah, [00:28:00] that's addition.
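That "few operations, many structures" idea looks something like this in Python (a toy 2x2 matrix; the names are illustrative):

```python
# The same + message means different things for numbers and matrices,
# yet both answer it -- polymorphism in the abstract-algebra spirit.
class Matrix2x2:
    def __init__(self, a, b, c, d):
        self.cells = (a, b, c, d)

    def __add__(self, other):
        # Elementwise addition: a different algorithm than number
        # addition, but conceptually still "plus".
        return Matrix2x2(*(x + y for x, y in zip(self.cells, other.cells)))

def double(x):
    # Written once, works for anything that answers the + message.
    return x + x

print(double(21))                           # → 42
print(double(Matrix2x2(1, 2, 3, 4)).cells)  # → (2, 4, 6, 8)
```

One generic operation, `double`, applies across structures because each structure supplies its own meaning of `+`.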

And that's what abstract algebra studies. Now, I do have to bring this up, because another Turing award winner, Perlis, whom Alan Kay does eventually quote in here, has sort of the opposite view. Perlis says that it is better to have a hundred operations that operate on one data structure than ten operations that operate on ten data structures.

The power of a language like Lisp, for instance, is that you have just one basic data structure, the cons cell, and then you have all of these operations that work on it, and you get this multiplication---exponential growth of possible [00:29:00] combinations of those operations. Whereas if you split it up and had a bunch of different data structures with a few operations each, you don't get that exponential growth in combinations, because they're all separated out; they can't influence each other as much. So I just think it's really interesting that he's taking it from a different place, right? This abstract algebra really does the opposite.
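The Perlis side of the contrast can be sketched in Python, with a 2-tuple standing in for the cons cell and a few of the many operations you would layer on it:

```python
# One data structure -- the cons cell -- and many operations on it,
# all of which compose freely with one another.
def cons(head, tail): return (head, tail)
def car(cell): return cell[0]
def cdr(cell): return cell[1]

def from_list(items):
    cell = None
    for item in reversed(items):
        cell = cons(item, cell)
    return cell

def length(cell):
    return 0 if cell is None else 1 + length(cdr(cell))

def mapcar(fn, cell):
    # Builds a new cons chain; works with length, car, cdr, etc.
    return None if cell is None else cons(fn(car(cell)), mapcar(fn, cdr(cell)))

xs = from_list([1, 2, 3])
print(length(mapcar(lambda n: n * 10, xs)))  # → 3
print(car(mapcar(lambda n: n * 10, xs)))     # → 10
```

Every new operation written against cons cells immediately composes with every existing one, which is the combinatorial payoff Perlis is pointing at.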

Abstract algebra says, let's define plus for every possible value we can. What would it mean to define plus on this, et cetera? Which brings in polymorphism, but also, I feel like it requires a level of skill at creating algebras that most [00:30:00] people don't bring to the table. Let me put it that way. Is what he's saying that, in order to really get the power, you need to have some understanding of abstract algebra? I'm kind of worried there, because I've seen people defining a new class, making an API, and they're not even that great at defining APIs.

I'm not that good either. Right? I know how hard it is to write APIs. Books are written on this, and every time you write a class, you have to write a new API.

I'm worried. But then there's this other idea that one class can create millions of instances, right? So: [00:31:00] simple mechanisms controlling complex processes. The idea that you could just make this little thing and it'll grow, and there's morphogenesis.

Like the simple idea that a gradient of chemicals, of some hormone or something, can create this intricate organism, and then the cells can learn to differentiate. That's really interesting. And it's not something we talk about when we talk about object-oriented modeling.

It's interesting that that's what's behind it. So he's talking about all these influences. The one I didn't talk about was the computer Bob Barton designed. There's a whole section where he talks about its influence, but I thought this was a good summary.

Bob Barton, the main designer of the [00:32:00] B5000 and a professor at Utah, [That's where Alan Kay was at, Utah] had said in one of his talks a few days earlier: "The basic principle of recursive design is to make the parts have the same power as the whole." For the first time, I thought of the whole as the entire computer and wondered why anyone would want to divide it up into weaker things called data structures and procedures. Why not divide it up into little computers? Why not thousands of them, each simulating a useful structure?

This is just really lucky, or was it the time? That you could have someone who was an influential computer designer who taught at your school, and a few days earlier he had given a talk which [00:33:00] completely blew your mind about the nature of computing. I look back on my university experience and I don't remember any of these kinds of mind-blowing, powerful statements. Maybe they happened and I wasn't ready for them. I mean, that's entirely possible, and they just went over my head. But at the time I was in school, it seemed much more like a trade school.

Like you're learning this because you're going to go into the industry. And so we're going to try to teach you some practical stuff. And it's just amazing that you could have this. And I wonder if it happens these days. I don't know. Maybe it does, but I think that also, this is considered like a golden era, right? This is what, 1966 or something. It's something that we don't have [00:34:00] today, most likely. And I wish we did.

Then again, there's this idea of dividing up into weaker parts, which I think is really interesting. It goes into this idea of monism, right? The idea that we need to find one unifying principle for the whole system to work on. And I find that very satisfying, to think that there might be one principle, like in the lambda calculus, where everything's a function. In object-oriented programming, everything's an object, including the classes for describing the objects, including the methods. Even the pieces of code are objects, right?

Like that's very satisfying. It's like objects all the way down infinitely. It's recursive. But [00:35:00] over time I've thought, well, you know, do people a favor and differentiate a little bit. You know, what if there are two things and those two things interacting together also have a lot of power. You don't have this undifferentiated oatmeal of little objects passing messages to each other. What if there are two things? What if there are three things? So I feel like it's one of these philosophical pursuits that is very fruitful for sure, but maybe in the end is not where you want to end up.

It's really great to think that way, but no one wants to program in pure Lambda calculus. They want numbers that are represented in [00:36:00] binary, not as a Church encoding. They don't want it. It's very difficult to work with. Likewise, I think that there might be something to having some things be objects and some things just be data.

And that's kind of what Erlang does, right? It has the processes, but a message is just a piece of data. It is not an object. It's not a process. The mailbox is not a process; it's part of the process. That's just a thought there. I also think that there's something conflicting in the ideas that he talks about.

One kind of building block, able to differentiate into all needed building blocks. Yet at the same time, he wants this recursive property: everything has the same power as the [00:37:00] whole. I think it's a very brilliant idea, but I think it conflicts a little bit. A cell, once it's differentiated, is not recursive anymore. A skin cell is not going to turn into a liver cell.

That skin cell can't make a whole new person. The DNA is in there, I know, but it has already decided it can't do that anymore. So I think there's some little bit of conflict there, but it's still a very fruitful line of thought.

Okay. So he's building this machine called the FLEX machine, and he's talking about how he's trying to design the [00:38:00] language for it.

And there is a language called JOSS, and then one called Euler, which was a generalization of ALGOL in which types were discarded, different features consolidated, procedures were made into first-class objects, and so forth. Actually kind of Lisp-like, but without the deeper insights of Lisp.

And he's going to get to the deeper insights of Lisp very shortly. So he initially adopted a bottom-up Floyd-Evans parser (I don't know what a Floyd-Evans parser is) and later went to various top-down schemes, several of them related to Schorre's META II, that eventually put the translator in the namespace of the language.

Basically, he's got the compiler available at runtime, which is huge, and a very important feature of [00:39:00] Smalltalk.

Okay. We're going to skip a little bit, because now he's looking at stuff like Moore's law, and this idea of what the computer has to do for the user.

Even today, in 1992 it is estimated that there are only 4,000 IBM mainframes in the entire world, and at most, a few thousand users trained for each application.

There would be millions of personal machines and users. Where would the applications and training come from? Why should we expect an applications programmer to anticipate the specific needs of a particular one of the millions of potential users? An extensional system seemed to be called for in which the end users would do most of the [00:40:00] tailoring and even some of the direct construction of their tools.

This is really important. Whenever I use computers today, this is my biggest regret with how things turned out: you need to buy an app, you need to train yourself in that app, and then you're stuck in the app. We need to be able to do more tailoring.

We need to be able to even construct new tools. Just as an example, if I buy Photoshop, it gives me all these cool filters, all these tools, right? But I can't use those filters anywhere else. I bought them, right? I bought the software. Why can't I use that algorithm in my own program? [00:41:00] I feel like that is one of the things that I hate about computer programming these days.

Okay. So next he's talking about designing the language, and one of the issues: it used L-values and R-values. L-values are the left-hand side of the assignment and R-values the right-hand side, which worked for some cases but couldn't handle more complex objects.

For example, a[55] := 0. So: assign zero to the 55th index of a. If a was a sparse array whose default element was zero, this would still generate an element in the array, because the assignment is an operator and a[55] is dereferenced into an L-value before anyone gets to see that the R-value is the default element, regardless of whether a is an array or a procedure [00:42:00] fronting for an array.

What is needed is something like the function a that takes three arguments, 55, ':=', and 0, which can look at all relevant operands before any store is made. In other words, ':=' is not an operator, but a kind of index that can select a behavior from a complex object. It took me a remarkably long time to see this.

Partly, I think, because one has to invert the traditional notion of operators and functions, et cetera, to see that objects need to privately own all of their behaviors: that objects are a kind of mapping whose values are its behaviors.

So we see that he's struggling with this traditional idea of L-values and R-values and coming up with a better idea: that the assignment, which is typically done as an operator [00:43:00] built into the language, needs to be something redefinable, redefined specifically for the particular type of object that you have. If it's an array, it does this; if it's an object that simulates an array, it should do something else. That's what we're used to now in object-oriented programming.
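A rough Python sketch of the sparse-array example: because assignment is routed through a message the object owns (`__setitem__` here, standing in for Kay's `:=`), the object sees the index and the value together before any store is made.

```python
# A sparse array that inspects both operands of an assignment before
# storing anything, so assigning the default never allocates a cell.
class SparseArray:
    def __init__(self, default=0):
        self.default = default
        self.cells = {}   # only non-default elements are stored

    def __setitem__(self, index, value):
        if value == self.default:
            self.cells.pop(index, None)   # don't materialize a default
        else:
            self.cells[index] = value

    def __getitem__(self, index):
        return self.cells.get(index, self.default)

a = SparseArray()
a[55] = 0        # a naive L-value scheme would allocate an element here
a[10] = 99
print(len(a.cells))  # → 1
print(a[55])         # → 0
```

The assignment is no longer a built-in operator acting on a dereferenced L-value; it is a behavior selected from the object, which is exactly the inversion Kay describes.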

But this is where he's coming up with this idea. Okay, I'm going kind of long, so I'm going to have to speed up a bit. Next: again, I think he was in a golden age, right? I think that he's lucky. It's very important that he's lucky, but he's also the right person to be there at the center of this.

The big whammy for me came during a tour of the University of Illinois where I saw a one-inch square lump of [00:44:00] glass and neon gas in which individual spots would light up on command. It was the first flat panel display. I spent the rest of the conference calculating just when the silicon of the FLEX machine could be put on the back of the display.

The FLEX machine was the machine he was designing.

According to Gordon Moore's law, the answer seemed to be sometime in the late seventies early eighties.

Okay, so this is just showing his brilliance. Yes, right place, right time, but he saw a one-inch-square flat panel display and basically figured out that by the seventies or eighties he could invent a tablet: the silicon of his machine would be small enough to fit on the back.

It's crazy that the one-inch display would [00:45:00] grow and his silicon would shrink and they would be able to be one thing. I mean, that is nuts. I couldn't do that. I see stuff all the time that probably will be a major invention in ten years. But he did it. He saw this.

Okay. He's talking about GRAIL. I should mention: GRAIL is this system for drawing flowcharts with a pen, with an input [00:46:00] tablet.

In order to capture human gestures, Gabriel Groner wrote a program to efficiently recognize and respond to them, though everything was fastened with bubblegum and the system crashed often.

I have never forgotten my first interactions with this system. It was direct manipulation. It was analogical. It was modeless. It was beautiful.

Okay. So this is another influence on him: making it tactile, making it direct manipulation. The same kind of stuff we see with the mouse today.

This is where it came from through him. Early next year [This was 1969] there was a conference on extensible languages in which almost every famous name in the field attended. The debate was great and weighty. It was a religious war of unimplemented poorly thought out ideas. As Alan Perlis, one of the great men in computer science, put it with characteristic [00:47:00] wit.

Now he's quoting Alan Perlis, right? So it's a quote in a quote.

It has been such a long time since I have seen so many familiar faces shouting among so many familiar ideas. Discovery of something new in programming languages, like any discovery, has somewhat the same sequence of emotions as falling in love: a sharp elation followed by euphoria, a feeling of uniqueness, and ultimately the wandering eye (the urge to generalize).

Okay. So I just want to say that here is Alan Kay, who is pretty well known for his pithy statements, like "the best way to predict the future is to invent it," stuff like that. And here he is quoting someone who is also very pithy. So I thought that was worth mentioning.

Okay. But he's talking about how it was [00:48:00] all talk. No one had done anything yet; they were just talking about unimplemented, poorly thought out ideas. Right. Okay. So now he's talking about his machine: I had already made the first version of the FLEX machine syntax driven, but where the meaning of a phrase was defined in the more usual way as the kind of code that was emitted.

This separated the compiler-extensor part of the system from the end user. In Irons's approach... So this is Ned Irons, who had invented something; his language or system was called IMP. He's saying it's good: in Irons's approach, every procedure in the system defined its own syntax in a natural and useful manner.

I incorporated these ideas into the second version of the FLEX machine and started to experiment with the idea of a direct interpreter rather than a syntax-directed compiler. Somewhere in all of this, I realized that the [00:49:00] bridge to an object-based system could be in terms of each object as a syntax-directed interpreter of messages sent to it.

Huh. Very cool idea, right? Each object is getting a stream of tokens, and they are parsed by the object.

The mental image was one of separate computers sending requests to other computers that had to be accepted and understood by the receivers before anything could happen.

In today's terms, every object would be a server offering services whose deployment and discretion depended entirely on the server's notion of relationship with the servee. As Leibniz said: to get everything out of nothing, you only need to find one principle.
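As a rough sketch of that mental image (the `Account` class and its message format are my own invention, not anything from Smalltalk): each object receives a message as plain data, parses it itself, and decides for itself whether to respond at all.

```python
class Account:
    """Each object decides for itself how to interpret an incoming
    message; nothing happens unless the receiver accepts it.
    (Hypothetical example, not Smalltalk's actual mechanism.)"""
    def __init__(self, balance):
        self._balance = balance

    def receive(self, message):
        # The message is just data; the receiving object parses it.
        selector, *args = message
        if selector == "deposit":
            self._balance += args[0]
            return self._balance
        if selector == "balance":
            return self._balance
        # The receiver may simply refuse messages it doesn't understand.
        return "message not understood"

a = Account(100)
print(a.receive(("deposit", 50)))   # 150
print(a.receive(("balance",)))      # 150
print(a.receive(("format_disk",)))  # message not understood
```

The state stays hidden inside; the only way in is through the exchange of messages, which is the "server offering services" picture.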

Yeah. So this idea of all these objects talking to each other comes up again, and they're not [00:50:00] talking to each other through some kind of push-button API or remote procedure call. They're talking to each other in a language that can be parsed, right? So they have a rich way of speaking to each other. Okay. So now he's talking about Lisp.

The biggest hit for me while at SAIL [that's the Stanford AI Lab] in late '69 was to really understand Lisp. Of course, every student knew about car, cdr, and cons, but Utah was impoverished.

That's Utah, where he went to school, remember. Impoverished in that...

no one there used Lisp and hence, no one had penetrated the mysteries of eval and apply. Those really are more profound than car, cdr, and cons: eval and apply. I could hardly believe how beautiful and wonderful the idea of Lisp was. [00:51:00] [Note the way he says it: the idea of Lisp. He's not saying Lisp was beautiful or wonderful; he's saying the idea of Lisp was beautiful and wonderful.] I say it this way because Lisp had not only been around enough to get some honest barnacles.

Okay, so it got barnacles. It had been, oh yeah, over 10 years since it came out. So it had barnacles,

but worse, there were deep flaws in its logical foundation.

Ooh, interesting.

By this, I mean that the pure language was supposed to be based on functions, but its most important components, such as lambda expressions, quotes, and conds, were not functions at all, and instead were called special forms. Landin and others had been able to get quotes and conds in terms of lambda by tricks that were variously clever and useful, but the flaw remained in the jewel.

Remember, this is again this [00:52:00] monism: you need one concept, one principle, and in his case it's the message send, right? And in Lisp's case, there's two things: there are these special forms, and then there are functions. In the practical language, things were better. There were not just EXPRs, which evaluated their arguments, but FEXPRs, which did not.

My next question was, why on earth call it a functional language? Why not just base everything on FEXPRs and force evaluation on the receiving side when needed?

Okay. I should go into this. So what he's talking about is: if you're calling a function in Lisp, the evaluator will evaluate each of the arguments and then pass those to the function that gets called, right?

So the function doesn't know [00:53:00] where those values came from. It's passed by value, essentially. And then there was also this thing called FEXPRs, which didn't evaluate the arguments first. This is pass by name: it passed in the entire expression, right? So let's say the variable a is four, and you call the function increment on a in an expression.

It would pass 4 to the function increment, which would give you five, right? But with an FEXPR, it would pass the symbol a, as a piece of code, to the function, and it would be the function's job to evaluate it if it needed to. If it needed it, it could interpret it in any way.
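A hedged sketch of that EXPR/FEXPR distinction in Python (the `increment_fexpr` name and the string-as-code representation are my choices, just to illustrate the idea): an ordinary function receives values, while an FEXPR-like function receives the unevaluated expression plus an environment and forces evaluation on its own side.

```python
# An ordinary function (EXPR-style): the caller evaluates first,
# so the function only ever sees values.
def increment(x):
    return x + 1

# An "FEXPR-like" function: it gets the expression itself (here, a
# string of code) plus an environment, and evaluates only if and
# when it wants to.
def increment_fexpr(expr, env):
    value = eval(expr, {}, env)  # evaluation forced on the receiving side
    return value + 1

env = {"a": 4}
print(increment(env["a"]))        # caller evaluated 'a' -> passes 4 -> 5
print(increment_fexpr("a", env))  # receiver gets the symbol 'a' -> 5
```

In the second call the receiver could also have chosen not to evaluate `"a"` at all, or to interpret it some other way, which is exactly the freedom Kay is pointing at.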

Okay. So he [00:54:00] started this line of thought that said: take the hardest and most profound thing you need to do, make it great, and then build every easier thing out of it. So start with the hard stuff and make it good; make this deep idea really good. Okay. That was the promise of Lisp and the lure of lambda. Needed was a better "hardest and most profound" thing. Objects should be it.

So here we have it. He's got this great idea that objects should interpret their messages like a parser does, like an interpreter does, and not have this other mechanism of evaluating the arguments before passing them to the object. The object would do that as needed, which would allow the object to make a choice.

[00:55:00] Do I need to evaluate this? You have some kind of laziness involved. Or, like in the case of the lvalue and the rvalue: you know, my data structure is full, I don't need this value. Whatever; the data structure, the object, gets to decide what to do with the message that's passed to it.

Okay? So he's going to talk a lot about children now. He wants to create a computer for children. He meets Seymour Papert.

One of the basic insights I had gotten from Seymour was that you didn't have to do a lot to make a computer an object to think with for children, but what you did had to be done well and be able to apply deeply.

[00:56:00] So that's a little idea for him: it didn't have to be enormous, it just had to be done well and go deep.

One little incident of Lisp beauty happened when Allen Newell...

So he is a Turing Award winner as well. Notice, you know, his life is intertwined with all these brilliant people.

Allen Newell won it with Herb Simon for their foundational work on artificial intelligence and cognitive psychology. Okay.

Allen Newell visited PARC with his theory of hierarchical thinking.

So this is a protocol, a procedure for solving a problem with code, for developing an algorithm. And he was challenged to prove it [to prove this hierarchical theory]: he was [00:57:00] given a programming problem to solve while the protocol was collected.

So people are going to be writing down the steps.

The problem was: given a list of items, produce a list consisting of all the odd-indexed items followed by all the even-indexed items.

So break the list into all the odds and all the evens, and then put it back together.

He got into quite a struggle to do the program.

Okay. So it took him a while, and there was a bug apparently, but

in two seconds [this is Alan Kay] he wrote down this function in a Lisp syntax, which basically just says append the odds to the evens. Okay. And then a few seconds later: well, the odds are defined by this recursive statement, and the evens by this other recursive statement.

So in five lines of code, [00:58:00] he has a solution.
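Here's roughly that shape of solution, translated into recursive Python (the function names are mine; Kay's original was a few lines of Lisp):

```python
def odds(xs):
    # The 1st, 3rd, 5th... items: take the head, skip one, recurse.
    return [] if not xs else [xs[0]] + odds(xs[2:])

def evens(xs):
    # The even-indexed items are the odd-indexed items of the tail.
    return odds(xs[1:])

def separate(xs):
    # Append the odds to the evens: the whole solution.
    return odds(xs) + evens(xs)

print(separate([1, 2, 3, 4, 5]))  # [1, 3, 5, 2, 4]
```

Each definition is a declarative statement of what the answer is, and the three statements together are also the program, which is the point the episode is about to make.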

This characteristic of writing down many solutions in declarative form [so it's Lisp, written in functional form, right?] and having them also be the programs is part of the appeal and beauty of this kind of language. Watching a famous guy, much smarter than I, struggle for more than 30 minutes to not quite solve the problem his way

[There was a bug] made quite an impression. It brought home to me once again that point of view is worth 80 IQ points. I wasn't smarter, but I had a much better internal thinking tool to amplify my abilities. This incident and others like it made paramount that any tool for children should have great thinking patterns and deep beauty built in.

I just want to bring this up again because, these days, we're much more used to writing stuff in a higher-level [00:59:00] language than they were at the time. For the normal everyday programmer at the time, a lot of stuff was still done in a very procedural way.

And this idea of being able to write in a Lispy syntax, which, if you looked at it today, you would say is not very declarative, right? But you could imagine someone thinking this is declarative back in the day. You know, where are you initializing your variables, and where is your for loop?

You're just doing one condition and some recursion and you're done. And I can see that this is super powerful. It's one of the reasons [01:00:00] why Lisp has persisted: the syntax and the idea of, not pure functions, but first-class functions gives you this ability to write in a very declarative form.

Right. And I think it's great that he recognizes this. And especially, it's interesting that, you know, one thing people talk about in functional programming (I like functional programming) is conciseness. There's a story from John Hughes that there was some competition, like: write this in your favorite language.

It was all specified; the problem was pretty small. [01:01:00] And the C++ version was like a thousand lines of code, the C version was similar, the Java version was big, but then the Haskell version was like 50 lines of code, and everyone thought it was a joke.

It was not a joke, but people said: oh, you couldn't possibly do that; how can it be serious if it only took you 50 lines of code? And this is how it is. It's like what he says: you write down the solutions in declarative form and then they are also the program. You don't take that form and then say, well, now I've got to do the real work of converting it into machine code or whatever.

Like, that's the compiler's job. You just write it at this high level. I think that's something we can still appreciate, [01:02:00] because a lot of languages are still like, where's your variable declaration and where are your for loops? And this stuff looks so terse. It's like, where's the work being done?

It's in the recursion. Anyway, I'll move on from that. Okay.

As I mentioned previously, it was annoying that the surface beauty of Lisp was marred by some of its key parts having to be introduced as special forms rather than as its supposedly universal building block of functions. The actual beauty of Lisp came more from the promise of its metastructures than its actual model.

I spent a fair amount of time thinking about how objects could be characterized as universal computers without having to have any exceptions in the central metaphor. What seemed to be needed was complete control over what was passed in a message send; in particular, when and in what environment did expressions [01:03:00] get evaluated?

So here's the idea. He describes this down at the bottom of this page: the message has to contain way more than just the code. It has to have the global environment of the parameter values, because you don't know where this is going to be evaluated. The sender and the receiver have to be passed in.

Right? Who sent it? Who's receiving it? What kind of reply style do you want? It's going to have a status, the reply with the eventual results, the number of parameters, et cetera. And so he calls the message a generalization of a stack frame. In a traditional implementation of a call stack, the functions calling each other have

a stack [01:04:00] where you put a stack frame that represents mostly the arguments that were passed in, and then where to jump back to when you're done, right? And so this is like expanding that out and saying, well, what if we had the sender and the receiver, the environment, and so on; you just blow this up.

Like, what's all the information we could put in there? And that becomes the message. So this is very much relating function calls with message sending, right? The function call is really like a message. And this reminds me a lot of Scheme. Scheme was actually developed initially to experiment with the actor model.

I know, it ended up nothing like Erlang. [01:05:00] But what happened was they had this idea of actors talking to each other. The actor model has a very functional, recursive style, but it also has this message send, right? So it's: send a message and then recurse to receive the response, right?

So it had this tail-call-elimination recursive loop thing, and it also had this message send. But when they were deep in the code, the code for the message send looked just like the code for a function call, for a tail call. And so they said, let's just refactor it, because it's duplication.

And so they were left with just the function call. Now, this is a talk for another day, but just to quickly [01:06:00] put an end to it: they didn't actually do the actor model, because the actor model has locality, right? The computation happens in a process that is separate from the other process.

So in theory, in a single-threaded system, the actor model does act just like tail call optimization and recursive functions calling each other. But in practice, having the processes be separate and a piece of data copied from one to the other is actually a better way. Still, it's related to this.

So I thought that that was a pretty cool story. Okay.

For an object like Lisp, it is almost certain that most of the basis of our judgment is learned and has much to do with other related areas that we think are beautiful, such as much of [01:07:00] mathematics.

So he's trying to talk about what makes Lisp beautiful.

Why does he like it so much? You know, there are universally appealing forms, but mostly that's because of our shared biology. But this is something different. This is learned.

One part of the perceived beauty of mathematics has to do with a wondrous synergy between parsimony, generality, enlightenment and finesse.

I think he's going to give an example, because I had no idea. Like, wow, those are some pretty general ideas there.

So for example, the Pythagorean theorem is expressible in a single line [so that's parsimony], is true for all of the infinite number of right triangles [that's generality], is incredibly useful in understanding many other relationships [so that's enlightenment], and can be shown by a few simple but profound steps [so that's finesse].

When we turn to the various languages for specifying computations, we find many to be general [01:08:00] and a few to be parsimonious. For example, we can define universal machine languages in just a few instructions that can specify anything that can be computed.

But most of these we would not call beautiful, in part because the amount and kind of code that has to be written to do anything interesting is so contrived and turgid. A simple and small system that can do interesting things also needs a high slope, that is, a good match between the degree of interestingness and the level of complexity needed to express it.

Interesting. I really like that, this idea of a high slope. In other places he talks about the slope of Lisp: in a few lines of code you can define a Lisp interpreter, and now you're programming at a higher level really easily. So in a few lines of [01:09:00] code, now you're up here; you're doing recursion and lambda and stuff.
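As an illustration of that "high slope" claim (this is my own toy sketch, not Kay's or McCarthy's code): a couple dozen lines of Python are enough to evaluate a little Lisp-like language with variables, conditionals, lambda, and recursion.

```python
import operator

def evaluate(expr, env):
    """A deliberately tiny Lisp-style evaluator, for illustration only."""
    if isinstance(expr, str):                 # a symbol: look it up
        return env[expr]
    if not isinstance(expr, list):            # a number: it is itself
        return expr
    head = expr[0]
    if head == "if":                          # ["if", test, then, else]
        _, test, then, alt = expr
        return evaluate(then if evaluate(test, env) else alt, env)
    if head == "lambda":                      # ["lambda", [params], body]
        _, params, body = expr
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    # Application: evaluate everything, then call the head on the rest.
    fn, *args = [evaluate(e, env) for e in expr]
    return fn(*args)

env = {"*": operator.mul, "-": operator.sub, "<": operator.lt}
# Factorial, written in the little language itself:
env["fact"] = evaluate(
    ["lambda", ["n"],
        ["if", ["<", "n", 2], 1, ["*", "n", ["fact", ["-", "n", 1]]]]],
    env)
print(env["fact"](5))  # 120
```

A handful of lines buys a language you can already define factorial in, which is the "good match between interestingness and complexity" being described.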

And so that's one of the things that's needed. And you don't get that in most systems, right? You can get Turing completeness, but you don't have this high slope; you're stuck in this crappy machine code you have to write. All right. He talks a lot about politics at Xerox PARC. I'm going to skip through that.

I mean, it's really interesting stuff, but it doesn't really help to answer the question. He was very involved in dealing with management, getting money, all that stuff. Okay. Now he's talking about Smalltalk, and there's a version out, and here we go. By this time, most of Smalltalk's schemes had been sorted out into six main ideas that were in accord with the initial premises in designing the interpreter.

Okay. I'll read the six. [01:10:00] They're off on the side

1. Everything is an object.

2. Objects communicate by sending and receiving messages (in terms of objects). So messages themselves are objects.

3. Objects have their own memory (in terms of objects). So the stuff inside the object is also objects.

Everything is an object; that's number one. Okay.

4. Every object is an instance of a class (which must be an object). He's really reiterating this idea.

5. The class holds the shared behavior for its instances (in the form of objects in a program list). So even the behavior is defined as objects; the methods and the code are objects. Everything's an object.

6. To eval a program list, control is passed to the first object and the remainder is treated as its message.

[01:11:00] So here's the thing we were talking about: the object gets the entire message unevaluated. Whatever the user typed in basically gets passed to that object, and once control is passed to it, it's up to the object to interpret it. Okay.
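A tiny sketch of principle 6 (the `Num` class and `run` function are hypothetical, just to illustrate the reading): control goes to the first object in the program list, and the rest is handed over, unevaluated, as its message.

```python
class Num:
    """A number-object that interprets messages sent to it.
    Illustrative only; this is not Smalltalk's actual machinery."""
    def __init__(self, value):
        self.value = value

    def receive(self, message):
        # The receiver parses the message itself: a selector plus arguments.
        selector, arg = message[0], message[1]
        if selector == "+":
            return Num(self.value + arg.value)
        raise ValueError("message not understood")

def run(program):
    # Principle 6: the first object gets control,
    # and the remainder of the program list is its message.
    receiver, *message = program
    return receiver.receive(message)

# "3 + 4" read as: receiver 3, message "+ 4"
print(run([Num(3), "+", Num(4)]).value)  # 7
```

Nothing special-cases arithmetic here; `+` is just a message that this particular receiver happens to understand, which is the universal syntax the six principles add up to.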

The first three principles are what objects are about, how they're seen and used from the outside: everything is an object, they communicate with messages, and they have their own memory. These did not require any modification over the years.

The last three (objects from the inside) were tinkered with in every version of Smalltalk and in subsequent OOP designs. In this scheme, (1) and (4) [everything is an object, and every object is an instance of a class] imply that classes are objects and that they must be instances of themselves. (6) implies a Lisp-like universal syntax,

[this is the interpreter idea] but [01:12:00] with the receiving object as the first item followed by the message. So this would be like a function that took all of its arguments unevaluated, call by name, and had to evaluate them as it saw fit. And he's going into some of the deeper consequences of this and how it wasn't obvious to go all the way.

Simple expressions like a + b and 3 + 4 seemed more troublesome at first. Did it really make sense to think of them as a, the receiver, and + b, the message, or 3, the receiver, and + 4, the message? It seems silly if only integers were considered, but there are many other useful metaphorical readings of plus, such as with matrices, right?

He has a matrix and then he does [01:13:00] plus four to the matrix, and it adds four to all of the elements of the matrix. So it's still plus, right, but it's operating on a different receiver. This led to a style of finding generic behaviors for message symbols. Polymorphism is the official term.

(he believes derived from Strachey), but it is not really apt, as its original meaning applied only to functions that could take more than one type of argument. Okay. Since control is passed to the class before any of the rest of the message is considered, the class can decide not to receive at its discretion. Complete protection is retained.

Complete protection is retained. So this is where, sorry. The previous thing was where he was really thinking about, does it make sense to pass plus two, a number and plus four as a message. You know, this was, [01:14:00] um, a hard thing to. To to get over. Right. I still think it's kind of weird that you would pass a message to plus like does the number is, is addition a property of numbers, right?

Is it a behavior of numbers, or is it something outside of numbers? I mean, philosophically it seems like a useful thing to ponder. Okay. So now he's talking about how, in Smalltalk-71, and then turning into Smalltalk-72, they had this idea of commingling function and class ideas.

They were still in this world of functions such as factorial. So factorial could be written extensionally, and he has this definition of factorial n like a [01:15:00] function, or intensionally, as part of class integer. So it's a method on integer.

Of course, the whole idea of Smalltalk is to define everything intensionally.

So they still had this stuff happening where they would write a function like factorial and eventually decide, no, it should actually be part of integer. And that became an important idea in Smalltalk.

Okay. So he's talking about developing system applications. There are a bunch of cool applications that they wrote.

one of the small program projects I tried on an adult class in the spring of '74 was a one-page paragraph editor,

One page meaning one page of code. That was an ideal they had: you could write it small enough that there's [01:16:00] no scrolling in the code; you could just see it all. Okay.

It turned out to be too complicated for this adult class, but the example I did to show them was completely modeless and became the basis for much of the Smalltalk text work over the next few years. Of course, objects mean multimedia documents; you almost get them for free.

Early on, we realized that in such a document, each component object should handle its own editing chores.

Man, this is the kind of thing that upsets me. Remember, I said before that I get depressed. I don't know about you, but when I use an object-oriented programming language, it does not feel like I get multimedia documents for free.

Right, but this is what he's saying here: you have these letters in your document, and each letter is an object, right? Everything's an object. Well, the letters are being laid out somehow. Maybe they're laying themselves out, [01:17:00] and a letter is just like a rectangle with some pixels set in it. Why can't you just drag a picture in?

It's also a rectangle with some pixels set. Why can't you drag a video in? It's just a rectangle, you know? It's polymorphic. This idea of taking up space in the document, being laid out with everything else, seems to fall out for free from doing everything as objects all the way down. Whereas when I program in Java, ugh, multimedia editing is not easy.

So this is one of those things where he's like: Java is not doing this; somehow we lost it. We got really fascinated with this idea of a module system, classes that have APIs and stuff, but we lost this idea that [01:18:00] objects mean multimedia documents, you almost get them for free. Oh my goodness.

This is really emotional for me. I'm going to turn the page. Okay. So he's talking about an object oriented style developing.

This is probably a good place to comment on the difference between what we thought of as OOP style and the superficial encapsulation called abstract data types that was just starting to be investigated in academic circles.

Our early Lisp pair definition is an example of an abstract data type because it preserves the field access and field rebinding that is the hallmark of data structures. Considerable work in the sixties was concerned with generalizing such structures. The official computer science world started to regard Simula as a possible vehicle for defining abstract data types, [01:19:00] and it formed much of the later backbone of Ada.

Okay, let me talk about this for a minute. So, he talked about this Lisp pair; he has code for it, a cons, right? It has a car and a cdr; a first and a rest, you could say, or a head and a tail. There are different ways to define it, but it's an abstract data type because it has getters and setters: field access and field rebinding.

And so what it means is that the code using the cons gets to decide how to use it, right? I want to set this field, read that field, right? It's not a higher-level concept. It's an abstract data type. Okay.
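Here's what that abstract-data-type style of pair looks like in a Python sketch (the class is mine, echoing the Lisp pair he describes): the hallmark is field access and field rebinding, with the caller in charge of how the structure is used.

```python
class Pair:
    """A cons as an 'abstract data type' in Kay's sense: it exposes
    field access and field rebinding, so the CALLER decides how the
    structure is used. Illustrative sketch only."""
    def __init__(self, head, tail):
        self._head, self._tail = head, tail

    # field access (getters)
    def car(self): return self._head
    def cdr(self): return self._tail

    # field rebinding (setters)
    def set_car(self, v): self._head = v
    def set_cdr(self, v): self._tail = v

p = Pair(1, Pair(2, None))
p.set_car(99)    # the caller reaches in and rebinds a field directly
print(p.car())   # 99
```

Even though the fields are wrapped in methods, the interface is still "read this slot, write that slot"; the contrast with higher-level behaviors comes up a little later with the length example.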

So this led to the ubiquitous stack data type example in hundreds of papers, to put it mildly.

We were quite amazed at this, since to us, what Simula had whispered was [01:20:00] something much stronger than simply reimplementing a weak and ad hoc idea.

Okay. So he's describing this misunderstanding of the power they had. Let me put it this way: Alan Kay had the insight, he shared it with his team, and when they heard what other people were doing with Simula (which they had too, just like he had), they were surprised that those people didn't get this more powerful idea.

What I got from Simula was that you could now replace bindings and assignment with goals. The last thing you wanted any programmer to do is mess with internal state.

Even if presented figuratively. Even if you put a getter and setter on it; yeah, you're passing a message, it's a method call, whatever. It's still just getting it right out of the memory.

You're still leaking out all the [01:21:00] internals.

Instead, the objects should be presented as sites of higher-level behaviors, more appropriate for use as dynamic components. Okay. So this is where I'd love to talk about linked lists, right? How does Java define linked lists, as compared with Lisp?

Just for instance: in Java, LinkedList is a class, and inside you might have an inner class called Node or something. But the list is presented with an API, and inside it's building up this data structure, and all the methods in there know how to walk the list.

They have for loops that manage the pointers and stuff.

[01:22:00] Whereas in Smalltalk, they defined it much more in the way where the cons cell knew how to respond to the length message. Okay. And it was defined recursively: the length method did not loop through the tail with an accumulator variable, adding one to it until it got to the end.

It just said: I'm going to add one to whatever my tail responds with to the length message, right? So it is defined recursively. It is defined in terms of behaviors, declaratively, as opposed to this very imperative style of: initialize a variable to zero, loop until the pointer is null, add one each time. Which, sadly, is how things [01:23:00] are typically done.

This is really showing the difference between a data structure approach and a goal or behavior approach that Alan Kay wants to talk about. He wants to define: what does it mean to calculate the length? Well, it's just one plus the length of the rest of the list, right? And the interesting thing is, at any point that cons cell, if it responds to the right messages, could be replaced with another object that responds differently.

You could replace it later. That's what that level of indirection gives you.
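The contrast can be sketched in Python (the class and function names are mine): the behavioral version answers the length message recursively, while the imperative version walks pointers from the outside.

```python
class Cons:
    """Behavioral list cell: each cell answers the 'length' message
    by asking its tail, rather than an outside loop walking pointers.
    Illustrative sketch, not Smalltalk's actual collection classes."""
    def __init__(self, head, tail=None):
        self.head, self.tail = head, tail

    def length(self):
        # One, plus whatever my tail answers to the same message.
        return 1 + (self.tail.length() if self.tail else 0)

# Contrast: the imperative, pointer-walking version.
def length_imperative(cell):
    n = 0
    while cell is not None:   # loop until the pointer is null
        n += 1                # add one each time
        cell = cell.tail
    return n

xs = Cons(1, Cons(2, Cons(3)))
print(xs.length())            # 3
print(length_imperative(xs))  # 3
```

In the first version, any object that responds correctly to `length` could stand in for a `Cons` cell later; that substitutability is the level of indirection being described.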

Okay. So he's really... I'm going to read this again.

Instead, the objects should be presented as sites of higher-level behaviors, more appropriate for use [01:24:00] as dynamic components. Higher-level behaviors; not assignment, getters, and setters.

Those just give you a slightly better way of managing your memory, right?

He wants some higher slope, like we talked about before. Where does the special efficiency of object-oriented design come from? That's a good question. I just want to note that phrase, special efficiency, because he actually gives examples soon of how efficient it is. Okay.

This is a good question, given that it can be viewed as a slightly different way to apply procedures to data structures. Part of the effect comes from a much clearer way to represent a complex system. Here, the constraints are as useful as the generalities. Four techniques used together (persistent state, polymorphism, instantiation, and methods-as-goals for the object) account for much of the power. [01:25:00] None of these require an object-oriented language to be employed. ALGOL 68 can almost be turned to this style.

An object-oriented programming language merely focuses the designer's mind in a particular fruitful direction. However, doing encapsulation right is a commitment not just to abstraction of state, but to eliminate state-oriented metaphors from programming. [I'll read that again.] Doing encapsulation right is a commitment not just to abstraction of state [so you can't just put getters and setters on it and call it encapsulation], but to eliminate state-oriented metaphors from programming.

You have to use a new paradigm, a new metaphor. The state-oriented one is procedural; it's imperative; it's what we get from machine code. Yes, you store stuff in memory, you pull it into registers, you add to it, you store it back in [01:26:00] memory.

We've got to get rid of that. We've got to move higher, right? Okay.

I believe that the much smaller size of a good OOP system comes not just from being gently forced to come up with a more thought-out design. I think it also has to do with the bang per line of code you get with OOP. The object carries with it a lot of significance and intention.

Its methods suggest the strongest kinds of goals it can carry out. Its superclasses can add up to much more code functionality being invoked than most procedures on data structures.

That's inheritance, right? Superclasses.

Assignment statements, even abstract ones, express very low-level goals, and more of them will be needed to get anything done.

Generally, we don't want the programmer to be messing around with state, whether simulated [01:27:00] or not. The ability to instantiate an object has a considerable effect on code size as well.

What he means here is just the idea of being able to, in one statement, get some memory that you can now use, with all the behavior already attached.

If you had to do that with a malloc and then set everything up yourself, you'd be spending a lot of code. This idea that you just have an object and it does what it needs to do is a savings.
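As a rough Python sketch of that savings (my own illustration, hypothetical names): doing the allocation and wiring by hand, malloc-style, versus one instantiation statement that brings all the class's behavior with it:

```python
# The by-hand version: allocate the storage and wire up
# the behavior manually, field by field.
def make_point_by_hand(x, y):
    p = {"x": x, "y": y}

    def move(dx, dy):
        p["x"] += dx
        p["y"] += dy

    p["move"] = move
    return p


# The object version: one statement allocates the storage
# *and* attaches all the behavior defined on the class.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def move(self, dx, dy):
        self.x += dx
        self.y += dy
```

`Point(1, 2)` is one line; the by-hand version repeats its setup ritual for every new kind of thing you want to make.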

Another way to think of all this: though the late binding of automatic storage allocations doesn't do anything a programmer can't do,

its presence leads both to simpler and more powerful code. OOP is a late-binding strategy for many things, and all of them together hold off fragility and size explosion much longer than the older [01:28:00] methodologies.

Okay. He's making a lot of bold claims, but there's something missing, I believe.

I believe that there isn't yet a good theory as to how they achieved such efficiency of code. They have so little code for this system, and yet it had an animation system, an animation editor. You could draw pictures. It was doing music. It had a video display.

It was the whole system. It was the operating system.

What I'm looking for myself, as a personal quest, is to figure out how these systems can be so [01:29:00] efficient. Because what he's asking is much more like: how was Mozart able to create such beautiful music? We don't really know. We know that he was trained well. How were Alan Kay and his team able to build this in such a short time?

How is that even possible? We don't know. He's pointing at stuff, he's trying to explain it, but I don't find any of these statements that satisfying. Can you learn to do it? Can you teach someone else to do it?

It's just hard. You can teach someone to play music, but we don't know how to teach someone to be Mozart. Okay.

It started to hit home in the spring [01:30:00] of '74 after I taught Smalltalk to 20 PARC nonprogrammer adults.

He's talking about how difficult programming is. As a group of programmers building this really cool system, they were surprised when they brought it out into the world and found that it was not that easy.

So this is the group of nonprogrammer adults.

They were able to get through the initial material faster than the children, but just as it looked like an overwhelming success was at hand, they started to crash on problems that didn't look to me to be much harder than the ones they had just been doing well on.

One of them was a project thought up by one of the adults, which was to make a little database system that could act like a card file or Rolodex. They couldn't even come close to programming it. I was very surprised because I knew that such a [01:31:00] project was well below the mythical "two pages for end users" we were working within. That night,

I wrote it out and the next day I showed all of them how to do it. Still, none of them were able to do it by themselves. Later I sat in the room pondering the board from my talk. Finally, I counted the number of non-obvious ideas in this little program. They came to 17, and some of them were like the concept of the arch in building design:

Very hard to discover if you don't already know them.

So 17 non-obvious ideas in less than two pages of code.
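We don't have Kay's actual two-page program, but a hypothetical miniature card file in Python shows the shape of the problem: the surface area is tiny, yet ideas like collections as objects, predicate-based search, and a uniform matching protocol are all quietly load-bearing. This is my own sketch, not a reconstruction of his code:

```python
# A card in the file: a name plus arbitrary labeled fields.
class Card:
    def __init__(self, name, **fields):
        self.name = name
        self.fields = fields

    def matches(self, text):
        # A card matches if the text appears in its name or any field.
        needle = text.lower()
        return needle in self.name.lower() or any(
            needle in str(v).lower() for v in self.fields.values()
        )


# The file itself: a collection you can add to and search.
class CardFile:
    def __init__(self):
        self.cards = []

    def add(self, card):
        self.cards.append(card)

    def find(self, text):
        return [c for c in self.cards if c.matches(text)]
```

Even here, a beginner has to already know about constructors, keyword arguments, string normalization, predicates, and filtering a collection; none of those are visible on the surface.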

They were realizing how difficult programming is, even if you give someone an amazing system, an easy syntax, a totally graphical and visual environment. Okay? The connection to literacy [01:32:00] was painfully clear. It isn't enough just to learn to read and write. These people were learning to program; they could write the syntax, they could read it.

There is also a literature that renders ideas. Language is used to read and write about them, but at some point the organization of ideas starts to dominate mere language abilities. Okay? So at some point, it's not about being able to read a sentence; you have to work with the ideas the book is giving you.

He's making this analogy: there are big ideas in programming that you need to learn. You can't just sit down in an afternoon and discover them; there needs to be a tradition of teaching this stuff. Okay. So that was sort of the beginning of the end. Things were winding [01:33:00] down.

All right. I'm going to skip quite a lot here, but let me just talk about inheritance.

Unfortunately, inheritance, though an incredibly powerful technique, has turned out to be very difficult for novices, and even professionals, to deal with.

I just want to mention that I agree. I think inheritance is one of those really difficult ideas.


At this point, let me do a look back from the vantage point of today. [So remember, this is '92, published in '93.] I'm now pretty much convinced that our design template approach was a good one, after all. We just didn't apply it longitudinally enough. I mean by this that there is now a large accumulation of results from many attempts to teach novices programming.

They all have similar stories that seem to have little to do with the various features of the programming languages used, and everything to do with the difficulties novices [01:34:00] have thinking the special way that good programmers think. Even with a much better interface than we had then (and have today), it is likely that this is actually more like writing than we want it to be, namely for the 80% who don't get it right away.

It really has to be learned gradually over a period of years in order to build up the structures that need to be there for design and solution look ahead.

It just needs time. You need to see these ideas over and over, and be able to do the design and solution look-ahead. Should we even try to teach programming? I have met hundreds of programmers in the last 30 years and can see no discernible influence of programming on their general ability to think well or to [01:35:00] take an enlightened stance on human knowledge.

If anything, the opposite is true. Expert knowledge often remains rooted in the environments in which it was first learned, and most metaphorical extensions result in misleading analogies. A remarkable number of artists, scientists, and philosophers are quite dull outside of their specialty, and one suspects within it as well.

The first siren song we need to be wary of is the one that promises a connection between an interesting pursuit and interesting thoughts. The music is not in the piano, and it is possible to graduate Juilliard without finding or feeling it.

Ah. So I believe this is one of those self-referential statements where he's realizing that he's special. I mean, he is special. He [01:36:00] started reading when he was two. He says he's read some 20,000 books; he reads like 400 per year. And he's realizing that most people can't make these connections like he can, right?

He's been in this privileged space, Xerox PARC, working with brilliant people, and when he tries to bring these ideas out, even programmers can't make the connections that he's making. They can't put together these beautiful algebras he's been talking about. He's trained in abstract algebra.

He understands these ideas of higher abstract forms, right? [01:37:00] Forms that have behaviors described in this abstract way, and properties. He knows about biology and how your organism can get stuff done even though no piece knows the whole. Each piece just knows its little part, and they communicate with hormones and other signals, and stuff gets done.

You survive, right? You adapt to intruders. He knows how that's done, and most people don't, and even if they do, they can't extend it outside of their field. So he's feeling this: there's some success, but it's not as big as they wanted. So he's thinking, okay, we learned a lot.

Let's start over. "Let's burn the disk packs," as he says: [01:38:00] keep no code. We're going to start over and build it all from scratch. Okay, so they had this retreat where they talked about what to do. We were all agreed that the flexible syntax of the earlier Smalltalks was too flexible.

And this level of extensibility was not desirable. Remember all this talk about wanting to pass the entire program as a message to the receiver, which would interpret it, evaluate it in the original environment, do what it will with it. He's realizing it's too flexible.

It was amazingly powerful, but that much extensibility was not desirable. So Dan Ingalls, who was the main programmer of the Smalltalk system, [01:39:00] came up with this keyword-operator syntax, so it was flexible but could be read unambiguously. And then his compiler was 180 times faster than what they had before.

Which is pretty cool. But I think this is really fascinating. He spent so much time talking about finding this one basic monad idea that could bloom into every possible system, and that one idea was the object as interpreter of the rest of the sequence of commands, right?

It interpreted everything, and so it could choose whether an argument was evaluated or not, et cetera. And then they discard it. They decide it's [01:40:00] too flexible, and they go to something much more like a classical virtual machine kind of model. And it's faster, of course. Okay, so now they're talking about inheritance, whether to keep inheritance.


On the other hand, since things can be done with a dynamic language that are difficult with a statically compiled one, I just decided to leave inheritance out as a feature in Smalltalk-72, knowing that we could simulate it back using Smalltalk's LISP-like flexibility.
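The paper doesn't show what that simulation looked like, but one way to sketch the idea in Python (my own analogue, not Smalltalk-72's actual mechanism) is delegation: an object forwards any message it doesn't understand to an explicit "parent" object, which behaves a lot like single inheritance without language support for it:

```python
# Simulated inheritance via delegation: messages the child doesn't
# understand are forwarded to an explicit parent object.
class Forwarder:
    def __init__(self, parent=None):
        self._parent = parent

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails.
        parent = self.__dict__.get("_parent")
        if parent is not None:
            return getattr(parent, name)
        raise AttributeError(name)


class Animal:
    def speak(self):
        return "..."

    def kind(self):
        return "animal"


class Dog(Forwarder):
    def __init__(self):
        super().__init__(parent=Animal())

    def speak(self):  # "overrides" the parent's version
        return "woof"
```

`Dog` answers `speak` itself but lets `kind` fall through to its `Animal` parent, which is essentially method lookup up a chain, done by hand.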

So what happens is, by the time Smalltalk-76 came along,

Dan Ingalls had come up with a scheme for inheritance. I was not completely thrilled with it because it seemed that we needed a [01:41:00] better theory about inheritance entirely (and still do). For example, inheritance and instancing (which is a kind of inheritance) muddles both pragmatics (such as factoring code to save space) and semantics (used for way too many tasks, such as specialization, generalization, speciation, et cetera).

Okay, so what is he saying? That inheritance is this kind of catch-all code reuse thing, including instancing. All the instances of a particular class are sharing code; they're getting reuse through the instancing. And then also there's "we just need to save space," because they had very small machines back then.

So they would factor stuff into superclasses just for the space savings, just to have fewer lines of code. And then there's semantics: well, this [01:42:00] is a specialization of that, so it's an is-a hierarchy. And what's another one? Generalization: okay, we'll move this up into the general idea.
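The muddle Kay describes can be illustrated with a small Python sketch of my own: the same subclassing mechanism used once for semantics (a genuine is-a specialization) and once purely for pragmatics (code reuse), where the pragmatic use quietly asserts an is-a relationship nobody intended:

```python
# Semantic use: a Square genuinely *is a* specialized Rectangle.
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def area(self):
        return self.w * self.h


class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)


# Pragmatic use: Stack subclasses list purely to reuse its storage
# code -- but now a Stack "is a" list, so nonsense like
# stack.insert(0, x) is allowed, breaking the stack discipline.
class Stack(list):
    def push(self, item):
        self.append(item)

    def top(self):
        return self[-1]
</antml>```

One mechanism, two unrelated motives, which is exactly why it's hard to have a clean theory of what a subclass means.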

Um, but then what happened?

No comprehensive and clean multiple inheritance scheme appeared that was compelling enough to surmount Dan's original Simula-like design.

So they just couldn't find anything better. That's what happened in Smalltalk: they have no good theory of inheritance. All right?

Now they're talking about the user interface.

All of the elements eventually used in the Smalltalk user interface were already to be found in the sixties. They were just out there, in different labs.

Right. Okay. So he talks about how the project focused on children early on.

This led to a 90-degree rotation of the purpose of the user interface from "access to [01:43:00] functionality" to "environment in which users learn by doing."

That's where the GUI came from. You learn by doing: you click something; undo that, that wasn't what I wanted; maybe this button. Learn by doing. That's great, right? Okay. So now he's talking about how this all came together.

The consolidation can certainly be attributed to Dan Ingalls for listening to everyone, contributing original ideas, and constantly building a design for user testing. I had a fair amount to do with setting the context, inventing overlapping windows, et cetera, and Adele and I designed most of the experiments.

He's talking about all these influences and how they all came together. He says the context almost forced a good design to turn out anyway. They meandered, they didn't fully understand it, but just the context of having [01:44:00] kids and doing a lot of user testing forced a good design to turn out.

Anyway. All right, so

Smalltalk-76 was finished in November. It was fast, lively, could handle big problems, and was great fun.

Okay, now listen to this. This makes me sick. Okay.

The system consisted of about 50 classes described in about 180 pages of source code. This included all of the OS functions: files, printing and other Ethernet services, the window interface, editors, graphics and painting systems, and two new contributions by Larry Tesler: the famous browser for static methods in the inheritance hierarchy, and dynamic contexts for debugging in the runtime environment.

50 classes, [01:45:00] 180 pages of source code. My God, how is it so small? He was talking about the efficiency. I know some Java programs that do basically nothing and are 50 classes.

How could you do something in such a small space? It's insane. 180 pages of source code: you could read that, the whole thing. I have a Markdown parser library that I use, and I've had to look at the source code. It's in Java. It's like 200 classes just to parse Markdown.

It just really bugs me that we lost this. We don't know how to do it. We do not know how they did it. It's lost. Well, we do know: [01:46:00] they documented it all, it's in books. But we don't know it in the sense that we don't use it every day as part of our programming, as part of our jobs. Okay. So I like to bring this part up, because this is where Steve Jobs comes in.

So Steve Jobs, Jef Raskin, and some other technical people from Apple get a demo. They were doing lots of demos, but these were the people who finally did something.

Thus, more than eight years after overlapping windows had been invented, and more than six years after the Alto started running, the people who could really do something about the ideas finally got to see them.

Finally got to see them. So, um, you, you can see this, Steve jobs until he died, was still talking about how. It all came [01:47:00] from Xerox PARC object orient programming, uh, the graphical interface and NEC networked computing, which they're still kind of working on with iCloud. Right? Um, all of that, which is like basically defines Apple's business today was, was there, and they were just trying to commercialize it.

Polish it and make it a user-facing product, but it was all there. Okay, so one final comment. I'm still reading.

Hardware is really just software crystallized early. It is there to make program schemes run as efficiently as possible, but far too often the hardware has been presented as a given, and it is up to software designers to make it appear reasonable. This has caused low-level techniques and excessive optimization to hold back progress in program design.

Yeah. [01:48:00] Hardware is just software crystallized early.

You know, he talks about how you would wire your programs. I'll get to that. One way to think about progress in software is that a lot of it has been about finding ways to late-bind, then waging campaigns to convince manufacturers to build the ideas into hardware. Early hardware had wired programs and parameters.

If you wanted to calculate something, say the sine of some number, you would have to take the wires and connect them up to make the sine program, and then you'd have to wire up your parameter, whatever number you're passing to the sine function, as another thing. That's what programmers did: they connected wires together.

Okay. Random-access memory was a scheme to late-bind them. Now we don't [01:49:00] have to have the parameters as wires; we can store them as bits in a memory. Looping and indexing used to be done by address modification in storage; index registers were a way to late-bind that. So he's just talking about how the development of all these hardware things that we take for granted today was not obvious.

They were ways of making the software run better, not the other way around. It wasn't "here's the new system we designed, the new Pentium or the new i7 chip, and here's its instruction set; software people, get going." It was the opposite. The software people used to say, well, we need this to run faster.

Or not even faster: we need this to be late-bound. We don't want to determine [01:50:00] ahead of time where our loop is going to be, what array it's going to be done in; we want a register that tells us, so we can jump around in it. All right. From the late-binding perspective:

OOP can be viewed as a comprehensive technique for late-binding as many things as possible: the mix of state and process in a set of behaviors, where they are located, what they are called, when and why they are invoked, which hardware is used, et cetera. And, more subtle, the strategies used in the OOP scheme itself.

The art of the wrap is the art of the trap.

The art of the wrap is the art of the trap.

What does that even mean? His idea is late binding for all this stuff. What objects do we [01:51:00] need? Where are they going to be located? We want to late-bind. We don't want to decide now. We don't want to specify, like in some programming languages.

You had to specify where all the data is going to be stored, right? You had to specify the range you'll need. Late binding: we'll determine that later. Not even at compile time; as late as possible, when we need it. What they're called, the names of all these objects; when and why they're invoked.

We don't know when we're going to invoke it. We might not. We might type a program and it'll invoke them, or it might not. We don't know which hardware is used. All of these things are being late-bound. But then, more subtle: the strategies used in the OOP scheme itself.

This is key, but [01:52:00] it's very subtle. He wants to be able to change how inheritance works, to change how method lookup happens: late binding after the program already exists. After it's been compiled and written, he wants to change it. Okay? You want to be able to trap that intention, wrap it up in an object.
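A hedged Python analogue of "the art of the wrap is the art of the trap" (my illustration, not Smalltalk's actual mechanism): a wrapper object traps every message sent to it and decides, at the last possible moment, what to do with it. Here it logs the message name and then forwards it, changing dispatch behavior without touching the wrapped object's code:

```python
# Wrap = trap: intercept every message send to an object and
# decide at send time how to handle it (here: log, then forward).
class Trace:
    def __init__(self, target, log):
        self._target = target
        self._log = log

    def __getattr__(self, name):
        # Look the method up fresh on every send -- late binding.
        method = getattr(self._target, name)

        def traced(*args, **kwargs):
            self._log.append(name)
            return method(*args, **kwargs)

        return traced


class Counter:
    def __init__(self):
        self.n = 0

    def bump(self):
        self.n += 1
        return self.n
```

`Counter` knows nothing about tracing; the trap lives entirely in the wrapper, and you could swap in a different lookup strategy the same way.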

Okay. And then he talks about what hardware would look like that would make this idea easier. He talks about how you don't want the numbers three and four to be wrapped, because it's too much wrapping (we call it boxing now). But you want to somehow have a way of trapping that message send and doing something so that what runs on the machine is very close to what would [01:53:00] have happened if you just used the plus operator in your machine language.

Okay, won't go into that. He's concluding here, right? This is all winding down. A twentieth century problem is that technology has become too easy. When it was hard to do anything, whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well.

This is inverse vandalism: the making of things because you can (as opposed to destroying things because you can, which is vandalism). Couple this to even less sophisticated buyers,

That's us.

And you have generated an exploitation marketplace similar to that set up for teenagers.

Ah, right. [01:54:00] So when it was hard.

It was better. That's what he's saying: it's too easy, we get trivial designs; when people had to make stuff the hard way, they did a better job. I don't know if this is a good conclusion to draw from his story, and I say that because his lab was the only one that created anything like this.

I mean, there were a lot of cool influences, people working on really important stuff, but they were few. And he said himself that professional programmers had a hard time with this stuff. So [01:55:00] I just can't see that technology has become too easy. Basically, people were making bad designs even when it was harder.

It's not like the easier, the worse. It was already bad. It took this very special group of people to make this. So I don't know if I believe this. I think there's some kind of truth in it, like constraints are a good thing, but

it just strikes me too much as "things were better before because things were worse." It was better when it was worse. I don't think that has to be the case. I [01:56:00] think what he's saying, maybe, is that if you are a really great programmer today, it's so tempting just to make bad designs, because you can still make something.

But it has a bad design. The constraints they were under were like a crucible: they led good people to make better stuff. Maybe that's a more generous interpretation. Okay, I'll leave it there. So this is the last thing I'm going to read from him. It is incredible to me that no one since has come up with a qualitatively better idea.

That is as simple, elegant, easy to program, practical, and comprehensive. Where are the Dans and Adeles (that's Adele Goldberg and Dan Ingalls) of the eighties and nineties that will take us to the next stage?

[01:57:00] We don't have a more powerful idea. It's incredible to him. He wants someone to come up with a better idea, to invent the future, but no one has. He's saying the pinnacle was Smalltalk-80, and here we are in 2020 and we don't have a better system.

So read this paper. I only read you a short bit, even though this took me, what, almost two hours. Read this paper, The Early History of Smalltalk by Alan Kay. It's incredibly important for understanding how we got where we are, both in terms of the computers we use every day (everybody uses their phones, everything) and the programming languages and how [01:58:00] we program.

So if you liked this, please subscribe. I'd really appreciate any ideas for papers I should read to you, my audience, here. And if you know someone who might be interested in this podcast, please tell them about it. Share the love.

If you liked it, they'll probably like it too. All right, thank you so much. My name is Eric Normand. This has been Thoughts on Functional Programming. As usual, thanks for listening, and rock on.