The Humble Programmer

We read from and comment on Edsger Dijkstra's 1972 Turing Award Lecture called The Humble Programmer. Is the problem with programming that we don't recognize our own limitations? We'll explore that and more. Audio recording of lecture.

Dijkstra Archive

Transcript

Eric Normand: "We shall do a much better programming job, provided that we approach the task with a full appreciation of its tremendous difficulty, provided that we stick to modest and elegant programming languages, provided that we respect the intrinsic limitations of the human mind and approach the task as Very Humble Programmer."

Hello, my name is Eric Normand. Welcome to my podcast. Today, I am reading from Edsger Wybe Dijkstra's 1972 Turing Award lecture. I'm not going to read the whole thing, as I usually only excerpt sections that I think are important. I think you should read the whole thing, but I'm going to pick excerpts, read from them, and then comment on them.

As I usually do with the Turing Award lectures, I'm going to start by reading from the biography. He's done a lot of cool stuff, so there's a lot to say. The biography is usually very interesting, gives you some context and explains what was important about the person who won the award.

This was in 1972. He was living in the Netherlands at that time. He later started working at University of Texas, Austin. That was in the 90s, so this was before that. He was born in 1930. That puts him at 42 when he won the award, so not unusual for these awards.

So far, they've happened around 42, and it's interesting that already in 1972 he was considered important enough to get this. I did not know that. I know that he was mentioned many times in my computer science education, for inventing Dijkstra's algorithm, for instance.

There are three things listed for why he was awarded the award. "For fundamental contributions to programming as a high intellectual challenge." That's very interesting. Perhaps, reading as a historian, this means that at some point there was a recognition that software was as important as hardware, if not more important.

I know several of the early award winners, the Turing laureates, were more focused on the machine. They were more focused on hardware, and even the first few emphasized how important it was that ACM was called the Association for Computing Machinery.

Dijkstra seems to have a very striking conflict with that. We'll get to that. Number two, for why he won the award, "For eloquent insistence and practical demonstration that programs should be composed correctly, not just debugged into correctness."

He has a very strong voice about how we should be programming. One of those aspects is that we need to be using proofs and building them correctly the first time, instead of debugging them. That's interesting. I don't know how...when I program, there's a lot of debugging. [laughs]

[inaudible 4:29] the first way, the first time. I'm not sure if what he said has had the effect that he thought it would, because this was 50 years ago now, 49 years ago.

"For illuminating perception of problems at the foundations of program design." OK, that's interesting, illuminating perception of problem at the foundations of program design, very cryptic. Let's read on. I'm going to read little sections from the biography.

"At the mathematical center, a major project was building the ARMAC computer. For its official inauguration in 1956, Dijkstra devised a program to solve a problem interesting to a non-technical audience. Given a network of roads connecting cities , what is the shortest route between two designated cities?

"The best known algorithms had running times which grew as the cube of the network size. The running time of Dijkstra's algorithm grew only as the square. Developed in 20 minutes, while Dijkstra was relaxing on a café terrace with his fiancé, Maria C. Debets.

"His shortest path algorithm is still used in such applications as packet switching software for computer communication." Yeah, it's used way more than just in that. This is what's known usually as Dijkstra's algorithm.

It is still the definitive shortest-path algorithm. I think that it is really telling [laughs] of this person's mental capacity, his intelligence, that he could do it in 20 minutes, whereas other people hadn't come up with this.

He has a special mind that he could do this. Dijkstra's algorithm is not complicated. If you read it, it's a pretty normal For loop, right? Or a While loop. You could write it yourself. Could you write it? Could you come up with it yourself?
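For reference, here's a minimal sketch of the algorithm in Python. It uses a priority queue rather than the linear scan Dijkstra originally described, and the road network is made up for the example, so treat it as an illustration rather than his formulation.

```python
import heapq

def shortest_distances(graph, source):
    """Dijkstra's algorithm: distances from source to every reachable node.

    graph maps each node to a list of (neighbor, edge_weight) pairs,
    with non-negative weights."""
    dist = {source: 0}
    queue = [(0, source)]  # (distance found so far, node)
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph[node]:
            candidate = d + weight
            if candidate < dist.get(neighbor, float("inf")):
                dist[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor))
    return dist

# A made-up road network: city -> [(neighboring city, distance)]
roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("A", 4), ("C", 1), ("D", 5)],
    "C": [("A", 2), ("B", 1), ("D", 8)],
    "D": [("B", 5), ("C", 8)],
}
print(shortest_distances(roads, "A"))  # {'A': 0, 'B': 3, 'C': 2, 'D': 8}
```

The whole thing is one loop that repeatedly settles the closest unvisited city and relaxes its neighbors, which is roughly what "a pretty normal loop" means here.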

That's a different question. OK, all right. "Around the same time, Dijkstra invented another very efficient network algorithm for use in designing the X1 computer known as the minimum spanning tree algorithm.

"It finds the shortest length of wire needed to connect a given set of points on a wiring panel. He published both network algorithms in a single paper in 1959." very cool. Now, one thing is, you could look at this as like, "Well, in the '50s."

Come on, software was so new that there was all this low-hanging fruit. Anyone around who was smart at the time could have just been picking up these easy algorithms that we use today. Possibly, but I'm not sure if that's why he won the award and other people didn't.

Other people had tried these algorithms, similar algorithms, and they got cubic growth while he got quadratic growth, pretty good. All right, let's continue. "At the Mathematical Center, Dijkstra and J. A. Zonneveld developed the first compiler for ALGOL, a compiler for ALGOL 60.

"ALGOL 60 was a very designed language, it was designed as a specification that could run on platform-independent. This was significant that he was writing the first compiler. He was probably the first to introduce the notion of a stack for translating recursive programs, reporting this seminal work in a short article.

"In the Oxford English Dictionary, the terms vector and stack in a computing context are attributed to Dijkstra." OK, again, a very important contribution, that we should use a stack, the idea of a vector which is a sized array.

"In 1968, Dijkstra published Go To Statement Considered Harmful, arguing that the Go To Statement found in many high level programming languages is a major source of errors and should therefore be eliminated.

"There was a huge commotion about this. It's a very, very important paper establishing the idea of structured programming and its benefits. Dijkstra was beginning to formulate some of his early ideas about programming as a mathematical discipline.

"He pointed out that software productivity and reliability is closely related to rigor in design, which eliminates software flaws at an early stage."

Very interesting. I think that there's still a thread of that in modern computing, people who want formal methods for testing and proving that your software is going to do what you think it should. He was early in that.

Then final thing I'm going to read from this. "Dijkstra was the first to observe, not only that non-determinacy is central in computations whose components interact asynchronously, but also that even when no asynchrony is involved, non-determinacy is an effective tool for reasoning about programs and simplifying program design."

Another important thing, and this happened after he got the Turing Award. His Turing Award lecture is called "The Humble Programmer." Now, some people might call this ironic, because he does not appear to be a humble person in his speech and actions.

In fact, Alan Kay has quipped that arrogance is measured in nano-Dijkstras. [laughs] The unit of arrogance is the nano-Dijkstra, which would imply that Dijkstra himself has one full Dijkstra, or a billion nano-Dijkstras. It's ironic. We'll maybe understand what he means by the humble programmer by the end of these excerpts; it's hinted at in the intro.

Let's just read a little bit from it. "As a result of a long sequence of coincidences, I entered the programming profession officially on the first spring morning of 1952. As far as I have been able to trace, I was the first Dutchman to do so in my country."

That's really interesting that he hasn't found anyone earlier than him in the Netherlands, which is quite possible. This is very early 1952, and a lot of the work had been done in the UK and in the US.

He's having an early career crisis. He wants to do physics, and he wants to do computing, but he figures that he can't do both. He's thinking of dropping one and he's now asking, "Should I drop physics and just go into computer science?"

He's talking to his professor, "But was that a respectable profession after all, what was programming? Where was the sound body of knowledge that could support it as an intellectually respectable discipline?

"I remember quite vividly how I envied my hard-work colleagues, who when asked about their professional competence, could at least point out that they knew everything about vacuum tubes, amplifiers and the rest, whereas I felt that when faced with that question, I would stand empty handed."

He's lamenting that there's no sound basis at that point in the '50s for programming. That there was no curriculum, no known body of knowledge, something that you could point at and say, "I am qualified because I know all this stuff."

I feel like that's similar today. We do have computer science curricula at universities and things, but as we know in the industry, a computer-science degree is not required. It's considered helpful, but I guess computer science departments aren't creating enough graduates.

Companies have to accept people who don't have those degrees, and they often say that they do quite well anyway, that it's not required.

My wife is a doctor. This is purely personal. They have a large body of knowledge that they consider essential to practicing as a doctor and you have to learn it.

It's just pure. Like, let's learn all this stuff, and it's very rigorous, very difficult. While I was watching my wife study and she would talk about what she was studying, I was jealous. I wish that we had that in our profession.

Not to gatekeep. Not to say, "If you don't know this, you can't program" as a software engineer, let's say. But as a way to point people to, "This is what you need to know." This has been evaluated, and there are studies of medical schools where they have this big curriculum, and then they graduate people from the curriculum.

Then they do studies where they go talk to those graduates two years later and they say, "So all that stuff we taught you, what of it is still useful to you today after two years?" The graduates give them the subset that they feel like, "This is the stuff I'm still using, that other stuff..."

They're trying to make it smaller and figure out what was actually important in there. I don't know if we're doing that in computer science, but I feel jealous. I feel envious when I hear of other fields that have a list like, "You definitely need this. You definitely need that."

Of course, this was so early in programming...There was no way you even had people attempting computer science curricula at universities. That's just my personal reflections on that. He's talking to his professor, van Wijngaarden. He gives him some good...He's talking about his interactions with him.

"After having listened to my problems patiently, he agreed that up until that moment, there was not much of a programming discipline.

"Then he went on to explain quietly that automatic computers were here to stay, that we were just at the beginning. Could not I be one of the persons called to make programming a respectable discipline in the years to come?"

This is a very insightful professor, perhaps. I have heard this story from Dijkstra before, and I've taken this kind of thing to heart. Whenever someone complains, like I was just complaining, "This isn't a rigorous discipline, there are all these problems with the field," that's my advice to them.

It's like, "You're the one recognizing the problems. You're obviously passionate, interested in it, maybe you have something to contribute because of that, because you have this perspective on the problem." I put that in here because I feel like this story affected me and I want to share it.

Then, of course, this sets him on a path, because he doesn't sleep that night and makes the decision right away that, "Oh, yes. I'm going to be the one to make this a rigorous discipline." I feel like it does kind of set the tone for everything he does later, Dijkstra that is.

All right, let's talk about some context. He wants to explain how, in 1972, programming has a history that determines how it is viewed, and that kind of needs to change. Let's talk about that context.

"Let me try to capture the situation in those old days in a little bit more detail, in the hope of getting a better understanding of the situation today.

"While we pursue our analyses, we shall see how many common misunderstandings about the true nature of the programming task can be traced back to that now distant past."

Like I said, he's setting the context with this history. We'll go through the different items that he thinks are important to understanding how programming was viewed in 1972.

"The first automatic electronic computers were all unique, single-copy machines. They were all to be found in an environment with the exciting flavor of an experimental laboratory. We cannot deny the courage of the groups that decided to try to build such a fantastic piece of equipment."

Now we've gone over this before with some of our previous Turing lectures. The project is build one computer. You couldn't just Amazon, order one online. They were still experimenting with how the computer would work and how to structure its memory. What parts do you build it out of?

"In retrospect, one can only wonder that those first machines worked at all at least sometimes. The overwhelming problem was to get and keep the machine in working order.

"The preoccupation with the physical aspects of automatic computing is still reflected in the names of the older scientific societies in the field such as the Association for Computing Machinery or the British Computer Society, names in which explicit reference is made to the physical equipment."

I have touched on this before where previous, especially the very early Turing laureates, said, "The computer itself, the machine, is what ties us together." He is making a distinct break with that. They were saying we should keep calling it ACM, the Association for Computing Machinery. It sounds very archaic to us. He's saying, it's kind of an old name, and it shows this bias toward the machine.

Dijkstra has been known to say that computer programming is as much about computers as astronomy is about telescopes. It used to be if you were an astronomer, you probably had to be an expert in lenses and grinding glass and stuff.

While that's still very important to doing astronomy, you can probably just buy what you need, at least, unless you are making a new telescope. Yes, we're at this point where you can just buy computers, and the programming is different, and he's trying to emphasize that.

"What about the poor programmer? Well, to tell the honest truth he was hardly noticed." He uses "he" everywhere, I don't think there's a "she" in his whole thing. I apologize. The English language has been undergoing changes because we've become more conscious of the bias that that introduces into our thinking.

I'm going to try from now on to say he or she, but I might mess up. I might miss one. I apologize. This is a historical document, so it is the way it is.

"Well, to tell the honest truth, he was hardly noticed. He or she was hardly noticed. The first machines were so bulky that you could hardly move them and besides that they required such extensive maintenance that it was quite natural that the place people were trying to use the machine was the same laboratory where the machine had been developed."

We've all heard the stories about how big these machines were, taking up a whole floor of a building. You'd never move them. You'd just build them in place, and that's where it lives, and then they were so brittle that they would break all the time and you'd spend so much time maintaining it that that took up more time than the programming task.

"The programmer's somewhat invisible work was without any glamour. You could show the machine to visitors, and that was several orders of magnitude more spectacular than some sheets of coding."

Makes sense. You could have a glass wall and say, "Hey, look at that, all those machines, or this one giant machine. Look how cool it is. Isn't it impressive?" Then you'd show someone a few papers full of code, and they wouldn't understand that.

"The programmer himself or herself had a very modest view of his or her own work. The work derived all its significance from the existence of that wonderful machine." The code had to be written for that particular machine, because there was only one, and it was totally incompatible with any other machine. So that colors our view, or at least the view in 1972, of the importance of programs.

"The programs had only local significance, and also because it was patently obvious that this machine. would have a limited lifetime. They knew that very little of their view would have a lasting value." speaks for itself.

"The machine was usually too slow, and its memory was usually too small. Its code would cater for the most unexpected construction." The size of the machine, physically, was very big, but the memory was small and it was so slow. Then people still hadn't figured out what the machine's instructions should be yet and the best way to structure it.

Often, to solve an actual problem in software, you'd have to do this weird set of instructions to get it to work, and that was the art of programming at that time. It was translating this real world problem into something that your computer could execute.

"Two opinions about programming date from those days. A really competent programmer should be puzzle-minded and very fond of clever tricks. Programming was nothing more than optimizing the efficiency of the computational process in one direction or the other."

These are the two things: that being a programmer is about being a puzzle solver, and that most of programming was about optimizing, because the computer was so small. I think that we still have remnants of these today. It's fading over time.

We still have this idea of puzzles, like, let's solve this problem in as few characters as possible, or look at this clever way of solving this HackerRank problem. And then optimizing for efficiency, we still think about that. I still hear talk like, "Oh, this operation is slightly more efficient than that one." It's there, "Oh, this saves a branch." Things like that.

"One often encountered the naive expectation that once more powerful machines were available, programming would no longer be a problem. For them, the struggle to push the machine to its limits would no longer be necessary. That was all that programming was about.

"In the next decade, something completely different happened, more powerful machines became available, not just an order of magnitude, more powerful, even several orders of magnitude more powerful.

"Instead of finding ourselves in a state of eternal bliss with all programming problems solved, we found ourselves up to our necks in the software crisis." This deserves talking about more deeply. People were solving very — from today's point of view — simple problems. They were generating large tables of numbers that were hard to calculate by hand.

Things like that where it's math. They were doing math faster, and able to scale up the amount of math they could do in a given amount of time. People thought that, "Well, if so much of the problem is optimizing it, so that we can fit it into this tiny memory, or instead of making it take 72 hours to run this computation, we could get it down to 36 or 24 hours, that would be nice."

If we had a faster computer, people thought that it would magically happen. Instead what happened was, we had bigger ambitions for what software should do. By 1972, we started wanting computers to be interactive, and we wanted them to do other nonmathematical things like lay out our documents or manage more complex things like accounting and banking and stuff like that.

That it wasn't just let's simulate a nuclear explosion, or let's decrypt some encrypted military communication. Our ambitions are getting bigger.

Those ambitions are outpacing Moore's Law, our computers getting better. If you wanted to write these early pieces of software, honestly, they might be simple For loops these days. [laughs] Maybe a little more sophisticated than that. Generating a table of the same formula on every row, we could probably write that in a few hours; it took them a long time to do.
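For instance, here's a toy Python loop, my own made-up example, that prints the kind of numerical table those early machines were built to produce:

```python
import math

# Print a small table of square roots and natural logarithms,
# the sort of numerical table early machines were used to generate.
print(f"{'n':>4} {'sqrt(n)':>10} {'ln(n)':>10}")
for n in range(1, 11):
    print(f"{n:>4} {math.sqrt(n):>10.4f} {math.log(n):>10.4f}")
```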

That has been solved but we have this other problem, what he's calling the software crisis, and we'll get to it a little bit later. He's trying to explain the causes. I want to focus on what he calls the major cause, "As long as there were no machines, programming was no problem at all. When we had a few weak computers, programming became a mild problem.

"Now we have gigantic computers. Programming has become an equally gigantic problem. As the power of available machines grew by a factor of more than 1,000, society's ambitions to apply these machines grew in proportion. It was the poor programmer who found his or her job in this exploded field of tension between ends and means." little highfalutin language there.

He's, basically, saying programmers are the ones who have to deal with the ambition that these new pieces of hardware are unlocking. "Then, in the mid-60s, something terrible happened, the computers of the so-called third-generation made their appearance.

"If your definition of price is the price to be paid for the hardware, little will prevent you from ending up with a design that is terribly hard to program for. For instance, the order code might be such as to enforce either upon the programmer or upon the system, early binding decisions presenting conflicts that cannot be resolved.

"To a large extent, these unpleasant possibilities seem to have become reality." He's lamenting this, what he's calling the third generation of computers. We don't use those anymore. I wanted to bring that up. He feels like there was a problem here.

He talks about how as an industry, programmers should have a lot of say in how the computers get designed. He talks about...I'll read it, "I regret that it is not customary for scientific journals in the computing area to publish reviews of newly announced computers, in much the same way, as we review scientific publications. To review machines would be at least as important."

That's interesting. That's a really interesting point, that programmers have to use the computers. At least at that time, they had to use the computers that were available commercially. Why not have programmers review them, and talk about how easy or difficult certain types of software would be to write for them.

He laments in the previous section how much they were designed by cost. This is a computer, it works and it's not that expensive. Why is it not expensive? We saved a few transistors by not having a convenient instruction set. [laughs] Basically.

All right, "The reason that I have paid the above attention to the hardware scene is because I have the feeling that one of the most important aspects of any computing tool, is its influence on the thinking habits of those who try to use it, and because I have reasons to believe that that influence is many times stronger than is commonly assumed."

He's making this argument that I think was touched on in the reasons for him getting the award. That this perception of problems at the foundation of program design, that the hardware that we use does influence our thinking habits.

Much more than it, "Oh, this is really inconvenient to program for." If you program for it long enough, you will find that your programming has been influenced by that and perhaps in a bad way. We should be careful what machines we use.

He's going to talk about some different projects in the history of computing that he wants to touch on. "In the beginning there was the EDSAC in Cambridge, England and I think it quite impressive that right from the start the notion of a subroutine library played a central role in the design of that machine.

"It is now 25 years later, and the computing scene has changed dramatically, but the notion of basic software is still with us. The notion of the closed subroutine is still one of the key concepts in programming.

"It has survived three generations of computers, and it will survive a few more, because it caters for the implementation of one of our basic patterns of abstraction. Its importance has been underestimated in the design of the third generation computers, in which the great number of explicitly named registers of the arithmetic unit implies a large overhead on the subroutine mechanism."

All right, that was a long thing. He's talking about the EDSAC and how it had a subroutine library. You could write some subroutines and reuse them for different purposes. You're going to calculate this table, you're going to calculate that other thing. There's a lot of commonality, common math formulas, between them.

Why don't we write those down one time and reuse them? Then they designed the hardware to make calling subroutines efficient.

He's talking about how in the third generation, they mess that up. They put all these named registers and you have to save and restore every time you do a subroutine call, and you return from that subroutine. This reminds me a lot of the "Lambda — the Ultimate GOTO" paper that came out just about the same time as this in the '70s.

Talking about how the compilers and program writers at the time believed that to be safe, you had to do all this saving of registers. Calling a subroutine was very expensive. The Scheme team showed that in some cases, subroutine calls could be compiled down to a single jump instruction, that you didn't need to save all these registers.
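The intuition behind a call becoming a single jump: when the call is the last thing a procedure does, nothing from the current frame is needed afterward, so nothing has to be saved. A rough Python sketch of that transformation, done by hand since Python itself doesn't eliminate tail calls:

```python
def gcd_tail_recursive(a, b):
    # The recursive call is in tail position: its result is returned as-is,
    # so no state from this frame is needed after the call.
    return a if b == 0 else gcd_tail_recursive(b, a % b)

def gcd_as_loop(a, b):
    # What a tail-call-eliminating compiler effectively produces:
    # the "call" becomes a jump back to the top with new arguments.
    while b != 0:
        a, b = b, a % b
    return a

assert gcd_tail_recursive(1071, 462) == gcd_as_loop(1071, 462) == 21
```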

I wonder how that influences this, that perhaps with a nice compiler, the problems with this third generation could be papered over. Still, it was thought at the time that the design of the hardware, the number of registers, had a large effect. Which registers had to be saved explicitly was very significant in how well subroutine calls performed.

He's bringing that up and I wonder if that holds up after the "Lambda — the Ultimate GOTO" stuff. "The second major development on the software scene that I would like to mention is the birth of Fortran. At that time this was a project of great temerity, and the people responsible for it deserve our great admiration.

"It would be absolutely unfair to blame them for shortcomings that only became apparent after a decade or so of extensive usage. In retrospect we must rate Fortran as a successful coding technique, but with very few effective aids to conception." He's saying Fortran was important. It was good for its time.

Then he says, "The sooner we can forget that Fortran ever existed, the better. For as a vehicle of thought it is no longer adequate. It wastes our brainpower and it is too risky, and therefore too expensive to use." Footnote here, remember I talked about Dijkstra's famous arrogance? Just stay tuned, because this goes on.

"Fortran's tragic fate has been its wide acceptance, mentally chaining thousands and thousands of programmers to our past mistakes. I pray daily that more of my fellow programmers may find the means of freeing themselves from the curse of compatibility." I broke in there to talk about arrogance.

I feel like this is one of those things that gave him...this ranting gave him this reputation for being arrogant. Arrogance doesn't mean you think you know everything, or you think you're better than other people. What it means is you don't know how to speak in such a way that you are clear in your place. [laughs]

You could ask Dijkstra. Ask him, "Do you think you're better than the Fortran people?" or, "Do you think you know better than someone who likes Fortran?" He might say, "No, it's my opinion. I'm just stating it." He would know. Then when he speaks it sounds like, "Oh man, you really are digging into this thing."

Going on and on...using all of your powers of wordplay and everything to give it to them. What he's saying is that Fortran was important. The Fortran project developed quite a lot of what we would consider programming language theory, compilers, parsing, that kind of stuff. It was hard, took a lot of person-years to do. They were starting from scratch.

They didn't have big machines like we do today, then this curse of compatibility. The problem was it was developed for a particular machine. Then some other set of programmers with a different machine would look at that and say, "Oh that's really nice. Let me program one for our machine." They would program one that was very similar, but different.

If you wrote Fortran code for one machine it was very hard to port it over. He's talking about the curse of compatibility. That is just one of the problems that I think he has with Fortran.

"The third project I would not like to leave unmentioned is a Lisp — a fascinating enterprise of a completely different nature. With a few very basic principles at its foundation, it has shown a remarkable stability. Besides that, Lisp has been the carrier for a considerable number of, in a sense, our most sophisticated computer application.

"Lisp has jokingly been described as the most intelligent way to misuse a computer. That description, a great compliment because it transmits the full flavor of liberation. It has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts." That's interesting.

For one, I'm a Lisper. I like it when it's mentioned. He doesn't go into much about why he likes it, except that it was the language for creative people trying to do really hard things, and having to have crazy language to be able to do that and to give them the freedom to do it.

Also this, he's maybe the third or fourth...I guess you got to count McCarthy, maybe the fifth. I don't remember. I didn't count well. It's common to mention Lisp in these Turing Awards, which is interesting. It's probably the most-cited language, maybe ALGOL is cited just as much.

"The fourth project to be mentioned is ALGOL 60. The famous report on the algorithmic language ALGOL 60 is the fruit of a genuine effort to carry abstraction a vital step further and to define a programming language in an implementation-independent way."

The notation we call BNF, Backus-Naur Form, was developed in order to write the ALGOL 60 specification. It's something we still use today when we're developing grammars. "Only very few documents as short as this have had an equally profound influence on the computing community." This is really important. ALGOL is kind of forgotten, in a lot of ways. No one writes in ALGOL.
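For a flavor of what BNF looks like and how directly it maps onto a parser, here's a made-up three-rule grammar for arithmetic expressions with a tiny recursive-descent recognizer in Python that follows the rules one for one. It's a toy of my own, not anything from the ALGOL 60 report.

```python
# A toy grammar in BNF:
#   <expr>   ::= <term> | <term> "+" <expr>
#   <term>   ::= <factor> | <factor> "*" <term>
#   <factor> ::= <number> | "(" <expr> ")"
# Each nonterminal becomes one function; each alternative, one branch.

import re

def tokenize(text):
    return re.findall(r"\d+|[+*()]", text)

def parse_expr(tokens, i):
    i = parse_term(tokens, i)
    if i < len(tokens) and tokens[i] == "+":
        i = parse_expr(tokens, i + 1)
    return i

def parse_term(tokens, i):
    i = parse_factor(tokens, i)
    if i < len(tokens) and tokens[i] == "*":
        i = parse_term(tokens, i + 1)
    return i

def parse_factor(tokens, i):
    if tokens[i] == "(":
        i = parse_expr(tokens, i + 1)
        assert tokens[i] == ")", "expected closing parenthesis"
        return i + 1
    assert tokens[i].isdigit(), "expected a number"
    return i + 1

def is_valid(text):
    # The text is valid if one <expr> consumes every token.
    tokens = tokenize(text)
    try:
        return len(tokens) > 0 and parse_expr(tokens, 0) == len(tokens)
    except (AssertionError, IndexError):
        return False

print(is_valid("1+2*(3+4)"))  # True
print(is_valid("1+*2"))       # False
```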

What he's saying was important about it, was that it was the first time someone tried to write a specification for a language. The semantics of that language outside of the computer it's going to run on.

Remember, Dijkstra is the one who said that computer programming is as much about computers as astronomy is about telescopes. This idea that we should not be developing a Fortran on this one machine, then another Fortran on that machine. Then now we have this problem of compatibility. It would be really cool to be able to write something, a specification.

Then he was the first to implement a compiler, as we saw in the bio. Now, he's also saying how cool it was, that it was so short. That will come up later. "The strength of BNF — that's Backus-Naur form — as a defining device is responsible for what I regard as one of the weaknesses of the language.

"An overelaborate and not too systematic syntax could now be crammed into the confines of very few pages. With a device as powerful as BNF, the report on the algorithmic language, ALGOL 60, should have been much shorter." He's critiquing it. He's saying it gives you too much power.

If you wrote a parser engine that uses BNF, you can write a lot of cool stuff, a complicated syntax, and offload all the backtracking and logic and stuff to the compiler itself. Now you've got this complicated syntax. It's a nice syntax for writing a syntax. It's really compact.

Wouldn't it be better if there was some correlation between the complexity of the syntax and the complexity of its expression? Maybe there is. Maybe you could say, "Well, with such a powerful language, you should be able to put the syntax in half a page, instead of a very few pages," like he says. Then he's saying, "It was short, but it could have been shorter."

This echoes something from Alan Kay, where he was like, "OK, the syntax of a language should fit on a T-shirt." He always brags about how the syntax of Smalltalk could fit on a note card — a three-inch by five-inch card. Very interesting that there's that correlation there, that synchronicity.

"Finally, although the subject is not a pleasant one, I must mention PL/I, a programming language for which the defining documentation is of a frightening size and complexity. Using PL/I must be like flying a plane with 7,000 buttons, switches, and handles to manipulate in the cockpit.

"If I have to describe the influence PL/I can have on its users, the closest metaphor that comes to my mind is that of a drug. I remember a lecture given in defense of PL/I by a man who described himself as one of its devoted users. Within a one-hour lecture and praise of PL/I, he managed to ask for the addition of about 50 new features.

"Little supposing that the main source of his problems could very well be that it contained already far too many features, The speaker displayed all the depressing symptoms of addiction." I don't think I'm going to comment on that. It stands for itself.

"So much for the past, I think that we have learned so much, that within a few years programming can be an activity vastly different from what it has been up till now.

"So different, that we had better prepare ourselves for the shock. The vision is that, well before the '70s have run to completion, we shall be able to design and implement the kind of systems that are now straining our programming ability at the expense of only a few percent in man years of what they cost us now.

"Besides that, these systems will be virtually free of bugs." Before we go into his vision, just want to reflect a little bit. It's 2021 and I wonder if we have that now. We have to really use the historical context. We can't look at it from today. In 1972, he says, "The kind of systems that are now straining our programming ability..."

What were they programming in 1972? Not what we would like to program today. Could we write those things, like he said, in an order of magnitude less time and virtually free of bugs?

We have to give them the benefit of the doubt there. Because today, our software, how we're working, what's available to us in terms of tooling and open source software libraries that we can use, I mean just the size of the software. The problems we're putting it towards.

Those are very different from in 1972. To give him the benefit of the doubt here, I think we could say that, yes, if we, today...I can't look at the end of the '70s. It's hard to do that. Today, we could write a 1972 program, a problem that they were having trouble with, much faster and almost no bugs. I'm going to say that now.

Why do I say that? This requires a lot more research than I'm actually willing to do to get it to be totally accurate, so take everything I'm saying with a grain of salt. I believe that a lot of the bugs were memory management bugs at this time, in 1972. Also, they were programming in very inelegant languages, let's say.

Today, given that we don't use gotos, thanks to Dijkstra, and our compilers are very reliable, and we have good garbage collection with array bounds checks and things, we probably could. We probably could get a 1972 piece of software written an order of magnitude faster. I think so.

Virtually free of bugs, meaning we don't have any memory leaks or buffer overruns, or the kinds of bugs that they had at that time. I'm going to say that just to be very generous, and say yes. We have that now, at least, in 2021.

I also think that perhaps it's not ambitious enough, his vision, now that I'm putting it in those details. An order of magnitude, he says a few percent in man-years of what they cost us now and virtually free of bugs. Those go together, because debugging is one of the most expensive things that we do.

I don't know if that would solve the software crisis. [laughs] An order of magnitude? We probably needed much more than that. Of course, it's the same problem. I don't know why he didn't foresee this. The same problem that happened before happens several times over: the kinds of software we want to write.

Our ambitions have grown. We want to write software that has global scale. Imagine something that has hard scaling problems, something like Google or a global network, a CDN system, Twitter, or Facebook. Those things have hard problems.

They took a long time to write. They're much harder than what they were doing in the '70s. In terms of the size, the amount of users they have concurrently, they're much more complicated. They're running on thousands of computers all over the world. They have to run these machine learning algorithms. It's the size and scale that we're doing now, compared to 1972, is many orders of magnitude.

A billion times more sophisticated than what they were doing back then. Of course, we're using languages that were designed 30 years ago. That's a problem.

Let's go into what he was talking about. You see what I'm saying with the problem, before we go into that.

There is a problem with his vision here, which is that he didn't foresee this would continue. That as computers got faster and faster, and smaller and smaller, and cheaper and cheaper, our ambitions would grow and outpace what we had available at the time. He didn't foresee that that would continue, even though he brought it up in this talk. That's interesting.

I think that he has an interesting vision. I don't quite agree with it, [laughs] but we'll read it. "Those who want reliable software, will discover that they must find means of avoiding the majority of bugs to start with. As a result, the programming process will become cheaper. If you want more effective programmers, you will discover that they should not waste their time debugging.

"They should not introduce the bugs to start." I agree with that. It's much cheaper to not introduce the bugs to start with. Do we have a good idea of how to do that? I don't know if we do. Certainly not in a way that makes it cheaper. Perhaps, he couldn't see the bugs people were dealing with in 1972 would be different from the bugs we're dealing with today.

We've opened new types of bugs that were rare back in that time. We've solved a lot of the bugs that were common then. I mentioned some before, buffer overruns, memory leaks, memory access errors. These are not that common anymore.

If you're using languages like Java, JavaScript, or C#, these languages don't have these problems that much. If you're using a language from this time like C, you do have those problems. We're still paying for that.

I think largely we've eliminated the kinds of bugs that he might have been thinking about. There are still bugs of logic, there are still bugs of design, meaning I programmed what I wanted correctly but I wanted the wrong thing. [laughs] I didn't realize that that would do that. It does what I wanted, but it's not the right thing.

Then, of course, there are bugs that have to do with the sheer complexity of the software, the distributed system bug, that I don't know if they were dealing with at that time. "There seemed to be three major conditions that must be fulfilled. The world at large must recognize the need for the change.

"Secondly, the economic need for it must be sufficiently strong. Thirdly, the change must be technically feasible." I just want to emphasize again, I read his vision, that was it. That we would somehow avoid bugs to begin with, and this would give us that order of magnitude, faster software with virtually no bugs. That was it.

He's got these three things for why he thinks a revolution would happen before the end of the '70s. The world at large must recognize the need for the change, the economic need for it must be strong and the change must be technically feasible. He's going to go over those and argue for them.

"Only a few years ago, to talk about a software crisis was blasphemy. The turning point was the Conference on Software Engineering in Garmisch, October 1968, a conference that created a sensation, as there occurred the first open admission of the software crisis. Our first condition seems to be satisfied."

That might be an interesting paper to read, the Conference on Software Engineering report, where the idea of the software crisis was first openly talked about. Of course the software crisis, we don't talk about it so much anymore, but what it meant was there were all these projects that were over time and over budget.

Some of them just failed completely to materialize after spending millions of dollars. It was so common. They were calling it, "The Software Crisis." Basically, we don't know how to build software. That's what they were saying.

"Nowadays, one often encounters the opinion that in the '60s programming has been an overpaid profession and that in the coming years, programmer salaries may be expected to go down." He's talking about the economic argument now. He says that the 1968 conference, which happened four years before this, opened the doors.

People are now openly admitting that we don't know how to write software on time and on budget. It's open, the doors are open, we believe it. We have the first condition, recognizing the need for the change. Now, the economic argument is number two. People are talking about programmers being overpaid and that their salaries may go down.

Certainly from where we stand now, programmers are really highly paid. From 1972 to now, 50 years later, 49 years later, it doesn't seem to have happened. I don't know if there was a dip or something, but that's interesting. I was very shocked when I read that. He's going to talk about this opinion. "Perhaps the programmers of the past decade have not done so good a job as they should have done.

"Society is getting dissatisfied with the performance of programmers and of their products." He's trying to explain why people think the programming salary should go down. Now he's going to try to argue for why he thinks it might.

"The price to be paid for the development of the software is of the same order of magnitude as the price of the hardware needed, but hardware manufacturers tell us that in the next decade, hardware prices can be expected to drop with a factor of 10. If software development were to continue to be the same clumsy and expensive process as it is now, things would get completely out of balance.

"You cannot expect society to accept this and therefore, we must learn to program an order of magnitude more effectively." Maybe, from 1972, that argument makes sense but looking back, it just didn't. It just did not play out. I think that he failed to do the projections. The projections to my mind, the important ones are that as computers get cheaper, more people will want them.

There is more of a market for software. Therefore, you can spend more developing the software, because you have more people to sell it to. He did not foresee what was happening at that time: there was a software industry growing, but it was still not big.

That it wasn't, "We are a company. We have $10 million, let's spend $5 million on piece of hardware and then $5 million to develop our own custom software for it." What happened in the '70s was the personal computer came out. Now it's cheap enough, few thousand dollars, for a small company to buy it.

Now they need software for it, and they can't afford custom software because big companies are spending five million dollars on that. So what happens? People develop a software industry, a software market where you write the software once. It's a general purpose piece of software like a spreadsheet. Then you sell it to everyone who has a personal computer.

That could happen with many computers, not just microcomputers. He failed to foresee that there was a shift from custom software per company to reselling the same software, recouping the cost of building the software by having a larger market to sell to.

That continues today. We have now software as a service and things like that, where a company will pay $100, $1,000 a month for their software depending on the software. It costs millions to make, because those companies can resell to lots of people.

He was wrong. I mean, just looking back he was wrong. "Third condition, is it technically feasible? I think it might be. I shall give you six arguments in support of that opinion.

"A study of program structure has revealed that programs, even alternative programs for the same task, and with the same mathematical content, can differ tremendously in their intellectual manageability.

"A number of rules have been discovered, violation of which will either seriously impair or totally destroy the intellectual manageability of the program." I actually think that this third technically feasible is the more interesting argument.

Not because it has a chance of being right, but because it gets deeper into his thinking about how software should be developed. This idea of intellectual manageability is key. You have to keep software manageable in your mind.

Otherwise, that's the cause of the software crisis, the cause of bugs and everything. He's talking about these rules: there has been study of how people program, and they found that if you stick to these rules, you can still solve the problem, but you keep it simpler, easier to keep in your mind. These rules are of two kinds.

"Those of the first kind are easily imposed mechanically, by a suitably chosen programming language. Examples are the exclusion of go to statements and of procedures with more than one output parameter." Excluding go to statements and excluding procedures with more than one output parameter.

Meaning you just have the return value, what we consider today the return value of a procedure, instead of setting a mutation. It's talking about a functional style. That's the first kind, the ones you can enforce in the language. "For those of the second kind, I see no way of imposing them mechanically, as it seems to need some automatic theorem prover, for which I have no existence proof.

"Therefore, for the time being, and perhaps forever, the rules of the second kind present themselves as elements of discipline required from the programmer. Some of the rules I have in mind are so clear that they can be taught, and that there never need to be an argument as to whether a given program violates them or not."

Maybe the computer can check them. He doesn't have that theorem prover he talks about, but another person could check them and know clearly whether the rules were violated or not.

"Examples are the requirements that no loop should be written down without providing a proof for termination, or without stating the relation whose invariants will not be destroyed by the execution of the repeatable statement."

He's saying, here's a rule: before you write a loop, or maybe not before, but alongside the loop, write a little proof that shows it will terminate. Just a little informal proof that shows, yeah, this isn't going to go into an infinite loop. You've got to write it down. That's the rule.
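As a small illustration of what that discipline can look like in practice, here's a binary search in Python with the loop invariant and the termination argument written down next to the loop. The example is mine, not one of Dijkstra's.

```python
def binary_search(items, target):
    """Return an index of target in the sorted list items, or None.

    Loop invariant: if target is in items, its index is in [lo, hi).
    Termination: hi - lo is a non-negative integer that strictly
    decreases on every iteration, so the loop cannot run forever.
    """
    lo, hi = 0, len(items)
    while lo < hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # interval shrinks: lo strictly increases
        else:
            hi = mid       # interval shrinks: hi strictly decreases
    return None

assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 4) is None
```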

[pause]

Eric: Is there a way to make a theorem prover that does that? It just comes to mind. I think that some people have shown that Idris can detect infinite loops. That would be interesting. Maybe that's the kind of thing he had in mind for a theorem prover, I'm not sure.

There's also stuff like TLA+ that has a theorem prover outside the language; maybe something like that would be good.

"I now suggest that we confine ourselves to the design and implementation of intellectually manageable programs." He says, just like before he said that there's these rules, that if you follow them, they keep your program intellectually manageable. He gave a couple of examples, no go tos. Then you have to write a proof for your loops that they're going to terminate.

If you follow these rules, you will confine yourself to intellectually manageable programs. "If someone fears that this restriction is so severe that we cannot live with it, I can reassure him or her. The class of intellectually manageable programs is still sufficiently rich to contain many very realistic programs for any problem capable of algorithmic solution.

"We must not forget that it is not our business to make programs. It is our business to design classes of computations that will display a desired behavior. The suggestion of confining ourselves to intellectually manageable programs is the basis for the first two of my announced six arguments."

This is background. He's making this claim that we should restrict ourselves to intellectually manageable programs. Even with that restriction, we can still solve any problem that we need to, and there's still variation within it. It's still rich enough that we can find different solutions to that same problem, in case we have to optimize or whatever.

He makes this weird claim, which I think takes a little bit of rereading. "We must not forget that it is not our business to make programs, it is our business to design classes of computations that will display a desired behavior." Maybe "programmer" is a misnomer, then: we're not making programs, we are designing classes of computations that display a desired behavior.

The program is just the artifact that we have to create, in order to manifest that class of computation. You give a program to the computer and it makes a computation. We're designing that. That's where we're designing that computation.

It's an interesting way to look at it. It's definitely different from some of the other things I've read on this podcast. What comes to mind is the stratified design paper, where they were talking about how programs are made for humans to read. It's communication between people, probably on a team.

Only secondarily for the computer to run it, which is contradicting what he's saying. It's very interesting that there's this contradiction here.

We're going to go over these six arguments, one at a time, they're in order. Here we go. "Argument one is that as the programmer only needs to consider intellectually manageable programs, the alternatives he or she is choosing from are much, much easier to cope with." Basically, they're easier programs.

They're not of the hard variety. They are the easy variety, because they are intellectually manageable. That's fairly simple statement, doesn't need much more than that.

How does that relate to what he's trying to prove, trying to argue for? They're easier programs, therefore easier to write, fewer bugs. That's what he's saying: easier to write, fewer bugs.

"Argument two is that as soon as we have decided to restrict ourselves to the subset of the intellectually manageable programs, we have achieved, once and for all, a drastic reduction of the solutions based to be considered. This argument is distinct from argument one."

When I first read it I was like, "That's not distinct, that's the same thing." He's saying there are just fewer programs that we have to consider, fewer choices, fewer alternatives. The solution space is smaller. He's saying that this means less time, fewer bugs.

Argument three, "Today, a usual technique is to make a program and then to test it. Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence. The only effective way to raise the confidence level of a program significantly is to give a convincing proof of its correctness.

"One should not first make the program and then prove its correctness, because then the requirement of providing the proof would only increase the poor programmer's burden. On the contrary, the programmer should let correctness proof and program grow hand in hand.

"If one first asks oneself what the structure of a convincing proof would be and having found this, then constructs a program with satisfying this proof's requirements, then this correctness concerns turn out to be a very effective heuristic guidance." There's a lot to talk about in here. This is a big, big statement here.

The first two are easy: first argument, easier programs; second argument, fewer programs. This one is saying that if you write the proof and the program hand in hand, your job is easier. You will have fewer bugs and it will be faster. Let's break this down.

He says that a usual technique is to make a program and then test it, but testing can't show it's correct; you need to write a proof for that. This kind of contradicts a quote by Knuth. It was a note that he wrote to a colleague about a program.

He said, "Be careful of this code, I have only proven it correct but I have not tested it." [laughs] I don't know about you but in my experience, proving something correct is not enough. You need proof and test. If you wanted to go that extra mile to prove it, I often don't prove it, to be honest, but I will test and I have found when I tried to prove, I still need to run it to be sure.

I think that he's showing a bias that somehow you can just prove everything correct. Not a bias, it's his opinion. I don't agree with it. It just doesn't ring true with me that that's even feasible, that you don't need to test, basically. I do like the idea of proving alongside writing the code.

I think that just like TDD has an effect on how you program, the resulting code will be different if you do test, code, test, code, test, code. What if you did proof, code, proof, code, proof, code? He's saying, because you have to prove it, that's going to have an influence on you.

Like he's saying this, "If one first asks oneself what the structure of a convincing proof would be and having found this, then construct the program satisfying this proof's requirements, then this correctness concerns turn out to be a very effective heuristic guidance." He's saying it's very often easier to think in terms of proofs and then write the program.

There's a very interesting lecture that I hope I can find again [Update: I found it.]. I watched it years ago, probably on YouTube, or maybe earlier than that, where he was showing how to solve problems, puzzles, using his technique. One of the puzzles he showed was what's called the wolf, sheep, cabbage problem. It's got other names too.

The idea is you're on one side of the river, and you have to cross the river, but the boat is not big enough to carry your wolf, your sheep, and your cabbage across all at the same time. In fact, you can only fit one of them. You have to get everything across without anything eating the other thing.

The sheep is going to eat the cabbage, if you leave them alone together, and the wolf is going to eat the sheep. It's a common problem that we all face every day. Like our sheep are going to eat cabbages, how do we deal with that? We need software to solve this problem. He showed in his lecture, through his method, that what would a proof look like.

He showed that, look, one thing that we can do is simplify this proof. It looks like there are three different things — there's a sheep, there's a wolf, there's the cabbage. Really what we're saying is the sheep is the different one, because the sheep can't be alone with the wolf, and the sheep can't be alone with the cabbage.

The wolf will not eat the cabbage, so they can be alone together. We can call those alphas. The wolf is an alpha and the cabbage is an alpha. This third thing, we'll call it something else. We'll call it beta.

Now the constraint is we cannot have an alpha and a beta together alone. We can have two alphas alone, but we can't have an alpha and a beta. Now we have this constraint that's very clear and mathematically workable, that allows us to now proceed with a method for crossing the river, getting everything across.
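Here's a rough sketch of what that constraint and method might look like in code — my own toy model, not anything Dijkstra showed, with all the names invented for illustration — just to see how small the safety rule becomes once the wolf and cabbage collapse into "alpha":

```typescript
// Toy model of the river-crossing puzzle using the alpha/beta abstraction.
type Bank = "L" | "R";
interface State { farmer: Bank; beta: Bank; alphas: [Bank, Bank]; }

const across = (b: Bank): Bank => (b === "L" ? "R" : "L");

// The whole safety rule, once wolf and cabbage are just "alphas":
// the beta may not share a bank with an alpha unless the farmer is there too.
function isSafe(s: State): boolean {
  return s.farmer === s.beta || s.alphas.every(a => a !== s.beta);
}

// Breadth-first search over safe states to find a crossing plan.
function solve(): string[] | null {
  const start: State = { farmer: "L", beta: "L", alphas: ["L", "L"] };
  const key = (s: State) => s.farmer + s.beta + s.alphas.join("");
  const queue: { s: State; plan: string[] }[] = [{ s: start, plan: [] }];
  const seen = new Set<string>([key(start)]);

  while (queue.length > 0) {
    const { s, plan } = queue.shift()!;
    if (key(s) === "RRRR") return plan; // everything is across

    const f = across(s.farmer);
    const moves: { label: string; next: State }[] = [
      { label: "cross alone", next: { ...s, farmer: f } },
    ];
    if (s.beta === s.farmer) {
      moves.push({ label: "ferry the beta (sheep)", next: { ...s, farmer: f, beta: f } });
    }
    s.alphas.forEach((a, i) => {
      if (a === s.farmer) {
        const alphas: [Bank, Bank] = [...s.alphas];
        alphas[i] = f;
        moves.push({ label: `ferry alpha ${i + 1}`, next: { ...s, farmer: f, alphas } });
      }
    });

    for (const { label, next } of moves) {
      if (isSafe(next) && !seen.has(key(next))) {
        seen.add(key(next));
        queue.push({ s: next, plan: [...plan, label] });
      }
    }
  }
  return null;
}

console.log(solve());
// prints a 7-step plan: ferry the beta, return, ferry one alpha,
// bring the beta back, ferry the other alpha, return, ferry the beta
```

The interesting part is `isSafe`: the whole puzzle's constraint is one line once the abstraction is in place. The search itself is routine.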

I think that's what he's trying to get at here — that you start with the proof, and figure out what it would even look like to prove that the software will always work.

It's a modeling step, and then you can just program that model. I think there's something to that. Argument four, "The amount of intellectual effort needed to design a program depends on the program length.

"We all know that the only mental tool by means of which a very finite piece of reasoning can cover a myriad of cases is called abstraction. As a result, the effective exploitation of his or her powers of abstraction must be regarded as one of the most vital activities of a competent programmer.

"In this connection, it might be worthwhile to point out that the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise." He's setting this up and I want to stop here. That's one of those quotes that everyone loves from Dijkstra.

I'll read it again, "The purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise."

He's talking about levels of abstraction, a lot of cool ideas in here. He's saying the only thing that we can do, the only thing we know that helps us with the mental burden of keeping a program in our memory or in our mind, is abstraction.

By suitable application of our powers of abstraction, the intellectual effort required to conceive or to understand a program need not grow more than proportional to program length.

Remember, he's saying the amount of intellectual effort needed to design a program depends on the program length. With good abstraction, he's saying, the intellectual effort it takes grows no more than proportionally to the length of the program — the number of lines, I guess, or the number of tokens.

The program grows a little bit, we have to use a little bit more mental power. OK, this argument is not done yet. "The identification of a number of patterns of abstraction that play a vital role in the whole process of composing programs." He's identified these, or people have identified them.

"Enough is known about these patterns of abstraction that you could devote a lecture to each of them." which is an excuse for not listing them here and talking about them. "What the familiarity and conscious knowledge of these patterns of abstraction imply, dawned upon me when I realized that had they been common knowledge 15 years ago.

"The step from BNF to syntax-directed compilers, for instance, could have taken a few minutes instead of a few years." Let's talk about this one. First, I'll explain it. He's saying, Over time, we've come to understand and identified a number of patterns of abstraction that play a vital role in programming, patterns of abstraction.

We know enough about these patterns that they're each sophisticated, but they're learnable. If you're familiar with them, it's faster. You can make leaps. Remember in the bio, this is one example: when he was working on ALGOL 60, he came up with the idea of using a stack to manage the recursive function calls that were possible in ALGOL.

The stack — that is a pattern of abstraction. What if we had known that 15 years ago? It was more than 15 years ago now, but imagine being in 1972. What if we had known that when we were developing Fortran? If we had all the compiler technology we have today, couldn't we write ALGOL in a weekend?
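To make "the stack as a pattern of abstraction" a bit more tangible, here's a small sketch of my own — nothing to do with ALGOL itself, and all the names are invented — showing the same computation written recursively and with an explicit stack, which is roughly the bookkeeping a compiler does for you:

```typescript
// My own toy example: the same computation written recursively
// and with an explicit, hand-managed stack.
interface Tree { value: number; children: Tree[]; }

// Recursive version: the language's call stack does the bookkeeping for us.
function sumRecursive(t: Tree): number {
  return t.value + t.children.reduce((acc, c) => acc + sumRecursive(c), 0);
}

// Explicit-stack version: the bookkeeping made visible -- a stack of
// "work still to do", which is essentially what has to be maintained
// to support recursive calls at all.
function sumWithStack(root: Tree): number {
  const stack: Tree[] = [root];
  let total = 0;
  while (stack.length > 0) {
    const node = stack.pop()!;
    total += node.value;
    stack.push(...node.children);
  }
  return total;
}

const t: Tree = {
  value: 1,
  children: [{ value: 2, children: [] }, { value: 3, children: [] }],
};
console.log(sumRecursive(t), sumWithStack(t)); // 6 6
```

Once you know the pattern, supporting recursion is "just" managing a stack of pending work — that's the kind of leap he's describing.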

This is important. We have the software and the tools. We already have BNF compilers or BNF engines that can read BNF and then generate a parser that can read that language. We have those. We have the software and you can just npm-install it. It's real easy.

He's not talking about that. He's talking about ideas, the abstractions themselves. For instance, you can go from BNF to a thing that doesn't just parse the language, but actually compiles it in one step — a syntax-directed compiler. He's saying that we've accumulated enough of these, and we're still accumulating them.
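To give a flavor of what "syntax-directed" means, here's a miniature sketch of my own — not anything from the lecture, with a made-up two-rule grammar — where each grammar rule carries its translation with it, so parsing and code generation happen in a single pass:

```typescript
// A miniature syntax-directed translator (my own sketch).
// Grammar:  expr ::= term (('+' | '-') term)*
//           term ::= digit
// Each grammar rule is a function, and each rule emits its translation
// (stack-machine instructions) the moment it recognizes its input.
function compileToStackCode(src: string): string[] {
  let pos = 0;
  const out: string[] = [];

  function term(): void {
    const ch = src[pos];
    if (!ch || ch < "0" || ch > "9") throw new Error(`digit expected at ${pos}`);
    pos++;
    out.push(`PUSH ${ch}`); // the rule's "semantic action"
  }

  function expr(): void {
    term();
    while (src[pos] === "+" || src[pos] === "-") {
      const op = src[pos++];
      term();
      out.push(op === "+" ? "ADD" : "SUB");
    }
  }

  expr();
  return out;
}

console.log(compileToStackCode("1+2-3"));
// [ 'PUSH 1', 'PUSH 2', 'ADD', 'PUSH 3', 'SUB' ]
```

The grammar rules map one-to-one onto the functions, and each rule emits its output as soon as it recognizes its input — parsing and compiling in one step.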

If you're familiar with them, the problems that you face are easy, you can code them faster. I wish he would've listed more of these here. I suppose that they're in his books and other publications that he's written. It'd be nice to see them.

The wolf-sheep-cabbage idea that I talked about is one of them, this idea of abstracting away the wolf-sheep-cabbage, and turning them into alpha-beta-alpha. Now, you can create a constraint that directs the solution, it's very clear once you have that constraint. The solution is, "I have to bring the sheep across because it's a beta, and I can leave him alone."

I'll put that sheep on the other side, and I'll come back and then I'll get an alpha. I can't leave the two alone, so I have to take the beta back. Now I can take the alpha and leave the beta alone. See, it just flows naturally. It becomes very obvious.

I wish that we had more of these patterns of abstraction. I don't think that he's talking about patterns in the design-pattern sense, in the Christopher Alexander sense. I don't think he has ever mentioned that lineage of ideas that came from Christopher Alexander's architecture. He's not doing that.

He's just saying patterns, in a general way, undefined. That's the fourth argument, saying that we have these patterns that make coding faster.

To recap the fourth: patterns of abstraction — stuff like we know to use a stack, we know to use this particular data structure for these problems. Fifth argument, "The tools we are trying to use, and the language or notation we are using to express or record our thoughts, are the major factors determining what we can think or express at all.

"The analysis of the influence that programming languages have on the thinking habits of their users, and the recognition that by now brain power is by far our scarcest resource. These together give us a new collection of yardsticks for comparing the relative merits of various programming languages." There is a lot to say, so I'll get started.

First, I'll explain what he says. Programming languages have an effect on how we think — a big effect, one of the major factors. And we can — perhaps... actually, I think he's not saying perhaps — we can compare the merits of programming languages based on that alone: how much brain power does it take to use this language, or to use this feature of this language?

That's cool. I don't know if we do that when we compare programming languages — say, if you were to review one. "Well, this language has this feature. It lets you do this. Isn't this cool?" We're not saying, "But man, I tried it when I was really tired, and I couldn't get it working." That's interesting.

To create a yardstick of that like, "How many different files do you have to look at to understand how a thing works? How many keywords are there that you have to understand?" A simpler language would have fewer keywords. Therefore, maybe it would be easier to understand. There's some metrics that you could probably put on there.

He continues with this fifth argument. "I see a great future for very systematic and very modest programming languages. Not only ALGOL 60's for clause, but even Fortran's DO loop may find themselves thrown out as being too baroque." This is a call for simplicity.

It sounds hard to argue with — simple languages are easier to work with, and will let you develop faster. At the same time, it seems like most languages are going the other way, adding features. I was helping someone with modern Java — Java 8 Streams and stuff. I don't know. Does it make the language better to have all that stuff?

I don't know. The jury is out. My opinion is probably not. It's more stuff to learn; it's more complicated. He's talking about the For loop and the Do loop as being too complicated. I appreciate that — especially as a Lisper. Lisp is a very simple language compared to other languages.

It also rings true with what Alan Kay talks about, the syntax fitting on one notecard. I don't think Dijkstra is alone here in saying that. Then he talks about an informal study. He's not just talking about simpler, less baroque syntax — all these little things you have to get right to make something like a For loop work.

Like a JavaScript For loop where you have to declare a variable, initialize it, and then put a semicolon. Then, you have this condition under which the loop continues to run. Then, you have another semicolon. Then you have a thing that runs for every iteration. It's weird. It's like all these little things you have to learn.

He's not just saying simpler is better. He's got another, deeper idea here. In this informal study, he got a bunch of experienced programmers, volunteers, to code a solution. This particular problem, they couldn't solve. He's going to talk about them.

"Their notion of repetition was so tightly connected to the idea of an associated controlled variable to be stepped up that they were mentally blocked from seeing the obvious." This is what he's saying here, the languages that they used, loops were always about stepping up a variable as an index into an array.

They had done that so much that they could not see another, obvious solution, which he doesn't spell out. The solution would be much simpler if the loop didn't revolve around incrementing a variable as an array index. They were so entrained, so used to doing that. Remember, he's talking about how a language influences your habits of thought.

They couldn't see the simple solution. By special-casing the For loop — it's a special case of loop built around initialized control variables — people are trained, through repetition, that a loop needs a variable, even when there's nothing that needs to be initialized. It usually starts at zero, and we're going to increment it each time.

Isn't i++ what you have to write at the end of your For loop? It takes a while to undo that. I feel this — there's a lot of unlearning that I've had to do as I get better at programming. All these habits — maybe they were crutches that helped me get further with the limited experience I had at the time.

Now I don't need to do that; I can write these different loops that don't have variables. Anyway, he's arguing that the problem is not just that the language is more complicated, but that the particular choices the languages have made have kept programmers from seeing obvious solutions.
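As a small illustration of that entrainment — my own example, not the problem from his informal study — here's the same repetition written with a stepped-up control variable and without one:

```typescript
// My own illustration: the same repetition with and without
// a stepped-up control variable.
const prices = [3, 7, 2];

// The entrained shape: declare i, initialize it, test it, step it up.
let total1 = 0;
for (let i = 0; i < prices.length; i++) {
  total1 += prices[i];
}

// Repetition with no index at all: the loop is about the values, not a counter.
let total2 = 0;
for (const p of prices) {
  total2 += p;
}

// Or no explicit loop: fold the repetition into a reduction.
const total3 = prices.reduce((acc, p) => acc + p, 0);

console.log(total1, total2, total3); // 12 12 12
```

The last two versions have nothing to get wrong about initialization, bounds, or stepping; the repetition is about the values rather than a counter.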

Maybe it's because they thought the For loop would make this one thing easy, but then it makes other things hard, because you get used to writing in that way.

It's not just that we need simpler languages — he is saying that we need simpler languages, but also that we have to throw out notions like special casing. Those are complications, and those complications have real consequences for how people program.

"Programming will remain very difficult because once we have freed ourselves from the circumstantial cumbersomeness, we will find ourselves free to tackle the problems that are now well beyond our programming capacity." OK. He does say, ''Yes, they are going to get harder.''

"We're going to tackle programs beyond our programming capacity." It is going to stay hard. He's kind of contradicting himself. He did not include that in his original vision. OK. Six argument, again, these are all arguments for programming is going to get faster and fewer bugs.

"Up till now, I have not mentioned the word hierarchy. But I think that it is fair to say that this is a key concept for all systems embodying a nicely factored solution. The only problems that we can really solve in a satisfactory manner are those that finally admit a nicely factored solution.

"The best way to learn to live with our limitations is to know them by the time that we are sufficiently modest to try factored solutions only, because the other efforts escape our intellectual grip. We shall do our utmost to avoid all those interfaces impairing our ability to factor the system in a helpful way."

All right, this is a lot. I'll have to go into it. He's talking about hierarchy. He is saying that it is a key pattern for how we should structure our abstractions and our solutions. By hierarchy he means a stratified design, where we build on top of existing solutions, and then build on top of those, and then build on top of those.
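Here's a tiny sketch of what that stratification looks like in code — my own example, not his, with all the names made up — where each level speaks only the vocabulary of the level just below it:

```typescript
// A made-up example of stratified design: each level is written only
// in terms of the level just below it.

// Level 1: raw storage -- a plain array of records.
interface Order { id: string; cents: number; }
const orders: Order[] = [];

// Level 2: a small "ledger" vocabulary built only on level 1.
const addOrder = (o: Order): void => { orders.push(o); };
const totalCents = (): number => orders.reduce((acc, o) => acc + o.cents, 0);

// Level 3: a report built only on the level-2 vocabulary; it never
// touches the array directly.
const report = (): string => `total: $${(totalCents() / 100).toFixed(2)}`;

addOrder({ id: "a1", cents: 250 });
addOrder({ id: "a2", cents: 1000 });
console.log(report()); // total: $12.50
```

The report never touches the array; if level 1 changed to a database, only level 2 would have to know.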

The only problems we can really solve in a satisfactory manner are those that finally admit a nicely factored solution. So he's saying that the problems we can actually solve in the way he's been talking about — you know, intellectually manageable — are the ones that use hierarchy. They use abstraction in this particular pattern, and those are the ones that work.

Yes, it's a constraint that we're setting — that we're going to work within these hierarchical programs — but that's a limitation of ourselves, of our brains. That's how we work, that's how we understand things. It's just a limitation we have to learn to live with, and the sooner we learn to live with it, [laughs] the better.

That's his final argument: that we will finally accept that we have to make hierarchical solutions, and that that's sufficient — it's enough. We'll be able to solve all of our problems using hierarchical solutions anyway. So that's all six arguments.

I want to go through them one more time. First, easier programs: we're going to write easier programs, because they're intellectually manageable. Second, fewer programs, because not all programs are intellectually manageable.

Not write fewer programs — I mean fewer programs to choose from. Third, if you write your proof and program hand in hand, it makes the programming easier, and, of course, there are fewer bugs because you've written a proof.

Argument four is that we have these patterns of abstraction that, if you know them, make programming faster. Fifth, we can converge on simple languages that will help us see more obvious solutions. Then sixth, we will finally accept that we prefer — or need — hierarchical solutions, and that will help us write programs faster.

OK, now he's going to conclude. "Let me conclude. Automatic computers have now been with us for a quarter of a century. They have had a great impact on our society in their capacity of tools."

"But in that capacity their influence will be but a ripple on the surface of our culture compared with a much more profound influence they will have in their capacity of intellectual challenge, which will be without precedent in the cultural history of mankind."

He is making some very flowery pronouncements here. He's saying that yes, they're important as tools, but that they will influence our intellectual culture more.

"Hierarchical systems seem to have the property that something considered as an undivided entity on one level is considered as a composite object on the next lower level of greater detail." A wall is made of bricks, and the bricks is made of crystals, and the crystals are made of molecules. That's how we understand it.

This is a continuation of argument six. He's going to go further. "The number of levels that can be distinguished meaningfully in a hierarchical system is kind of proportional to the logarithm of the ratio between the largest and the smallest grain." I'm going to stop here and explain this, because he put it in a weird way.

When you make a hierarchy like that, you've got the smallest grain at the bottom — the crystals are made of molecules, and the molecules are really tiny — and the biggest thing is the wall. As for the number of levels you can actually find in there: you don't want to go straight from molecule to wall. You don't want to build a wall out of molecules. That's too hard.

You'd have to place each molecule... no. There's a certain number of levels that makes sense. The number of levels is proportional to the logarithm of the ratio of the size of the big thing to the size of the little one.

You say, "Well, a wall is so much mass, and a molecule is so much mass so there's like 30 [laughs] orders of magnitude, 10 to the 30 difference in that proportion. Then you take the logarithm of that which is 30.

You couldn't imagine there being more than about 30 levels in between. Unless the ratio is very large, you can't expect many levels. He's not speaking so precisely that you can actually count the number of appropriate levels; he's just speaking in general, that a hierarchy implies a tree, and a tree implies logarithmic growth in the number of levels.

"In computer programming, our basic building block has an associated time grain of less than a microsecond, but our programs may take hours of computation time. I do not know of any other technology covering a ratio of..." I can't read it. "...10^10..." The superscript is small. ..."or more."

"The computer by virtue of its fantastic speed seems to be the first to provide us with an environment where highly hierarchical artifacts are both possible and necessary." All right, let's talk about this again. He's saying, our smallest operation, one ad let's say, takes less than a microsecond. Today, it's like nanosecond. Then, our programs run for hours.

They're doing trillions of adds in that time, more than that. Hours of computation, where the smallest step is a nanosecond — the ratio of that is really big. He says 10^10, but I'm sure it's probably more now. That deep range affords more levels. That's what he calls a highly hierarchical artifact — they're possible and necessary.
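As a quick back-of-the-envelope check — my numbers, not his — here's that grain-ratio arithmetic for a modern machine:

```typescript
// Back-of-the-envelope version of the grain-ratio argument (my numbers, not his).
const smallestGrainSeconds = 1e-9;       // one primitive operation, roughly a nanosecond today
const largestGrainSeconds = 2 * 60 * 60; // say, a two-hour computation

const ratio = largestGrainSeconds / smallestGrainSeconds;
const levels = Math.log10(ratio);

console.log(ratio.toExponential(1)); // 7.2e+12
console.log(Math.round(levels));     // 13 -- in his rough sense, the most levels you could meaningfully distinguish
```

Same shape as his 10^10 argument, just with today's smaller grain: the enormous ratio is what makes deep hierarchies both possible and necessary.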

"The challenge is so unique, that this novel experience can teach us a lot about ourselves. It should deepen our understanding of the processes of design and creation, it should give us better control over the task of organizing our thoughts. If it did not do so — to my taste — we should not deserve the computer at all." This is a very grand statement and something I agree with.

It teaches us a lot about ourselves. There's a lot to learn about understanding how we design things and build things. How can we control these tiny little adds and somehow turn them into something meaningful to us, at a level where we're thinking in terms of seconds and minutes? It's interesting. Now I'm going to reread the last bit, the part that I read in the introduction.

"We shall do a much better programming job, provided that we approach the task with a full appreciation of its tremendous difficulty, provided that we stick to modest and elegant programming languages, provided that we respect the intrinsic limitations of the human mind and approach the task as Very Humble Programmers."

We finally have the title which is called, "The Humble Programmer." It sums it up nicely. He's basically saying that we need to realize that we are not that smart.

The main way of being able to write better software is to make our software simpler, make our tools simpler, do a little more proof, maybe a lot more proofs, use hierarchies and tools of abstraction that have proven themselves in other fields, and not try to do something too complicated. [laughs]

Take a step back and just work on the stuff that we can deal with. I can't say I disagree, but I don't know if that's going to solve the problem. [laughs] I can't disagree that it's useful to...Let me put it this way. Why do we write tests? Because we're not that smart, we're not that good at programming. We can't blast out the correct program the first time.

Recognizing that, we write a test first, to make sure that we understand what we're going to code. We've limited the problem, so we can start with one test and code it up. This is all basic TDD stuff. We have TDD, we still have things like [inaudible 122:21], we still have [inaudible 122:24]. One of the things I was thinking about mentioning is what he said before about how our main [inaudible 122:42]. Let me read it.

"The analysis of the influence that programming languages have on the thinking habits of their users, and the recognition that by now, brain power is by far our scarcest resource, these together give us a new collection of yardsticks for comparing the relative merits of various programming languages. To explain this again, he's saying that our scarcest resource is our brain power.

That is the bottleneck on programming. To program faster, we need to make sure that the problems we're solving can squeeze through that bottleneck — this tiny brain that we've got — we have to make the program squeeze through that. I think that's right. I also think things might have changed some since then.

Software is built by teams of people over a long period of time. There's a growing recognition that it's not just an individual programmer's brain time, or brain capacity, that's the bottleneck. There's also a bottleneck of communication between programmers. Hiring a person is very expensive. You hire them onto your team, and they have to learn an entire code base.

They have to learn your practices, your processes. All these things are very important. It's not just, "Oh, this is a big thing to squeeze through their little brain."

We all have little brains. The new person has to squeeze it all into their brain. But it's not just that. Now you have to communicate with them and coordinate and set up, "I'll work on this while you work on that."

They're going to have to talk to each other at some point. We are dealing with this multi-person mega brain, overbrain, overmind of everyone on the team, and we have to consider our languages with that in mind, our languages, our tools, our processes, everything that we do.

I agree with this one way of putting it: the bottleneck, the constraint on how fast a person can program, is how much they can get in their head at one time. So let's squeeze it down — simpler languages, simpler ways of programming, "intellectually manageable," he calls them — and write proofs that help shrink down how it works.

Now it can fit through my head and I can write it correctly. Now bring in another person. Our communication — inside my head, the cells are communicating pretty well, but now they have to communicate with the cells in someone else's head.

That's another, maybe harder, constraint, because we're building software that we can't build alone — not given the timelines we have. We need to ship faster, so we need more people.

We have to somehow all communicate maybe through the code, through the artifact of the code. That should be a factor, is what I'm saying. I think that he's got interesting arguments. I don't think that they have come to pass.

I don't think that we have significantly simpler programs or programming languages. I don't think we write proofs. I think that there is another factor that maybe he didn't take into account, which is tooling. Tooling can potentially expand our intellectual capacity, as a machine-person symbiosis.

Just thinking, for instance: I don't have to keep the code in my mind. I can do jump-to-definition, or the compiler can catch certain errors that I might make. With things like that, we're passing off a lot of what used to have to be done by thinking onto our machines, and I agree with that — I think we should pass off more of it.

That was not part of his argument at all. There's no tooling; he wasn't talking about tooling. That's interesting. A simpler language with very powerful tools — wow, that sounds like even more orders of magnitude that could be gained here.

This kind of talk about reducing the cost of software reminds me a lot of the "No Silver Bullet" essay by Fred Brooks, which argues that at a certain point — probably around this time — there won't be another order-of-magnitude improvement. Of course, Dijkstra does not do a time analysis like that.

I don't know how much actual software he wrote [laughs] — Dijkstra, that is — meaning commercial software. I think a lot of it was academic programs, where it has to calculate the right answer but doesn't have to run reliably on hardware, interacting with humans and stuff like that.

Maybe, I don't know. I didn't do that research. What Fred Brooks argued was that a lot of the time in programming was, basically, waiting. You had to submit your program as a batch job: you gave it to a queue and you got a printout later.

Maybe days later, and you find that it didn't work. You had this debugging you had to do. You had to read the program again, find the bug. Now, you think you got it. OK, double check, all right, now, submit it again.

You go through the queue again, etc. By the time he wrote that No Silver Bullet essay, that time had been reduced. Especially now, every programmer has at least one computer. It's common practice.

You are not sharing a mainframe. You don't have to do all that waiting. You can know very quickly, within seconds, whether your program ran correctly.

He was saying we've already reclaimed all this time, and I don't think we can get another 10x out of it. Dijkstra argues that we can, but it has to be done through simpler languages, a discipline to only accept simple solutions, these hierarchies, these patterns of abstraction.

We'll see. I don't know if we've proven that. I don't know if we even know what these patterns of abstraction he was talking about are. To know a pattern of abstraction, a person might have to be smarter than you'd think, or more experienced than you'd think.

We have to consider that we are humble — maybe a [laughs] humble programmer cannot understand those. We're talking about this bottleneck on programming, which is our brainpower. Perhaps your bottleneck needs to be a little bit bigger before you can even take in the abstraction patterns he was talking about.

Maybe it has to be bigger before you can even realize whether your solution is intellectually manageable. Maybe it's just experience, that is not something you can teach at the beginning. For instance, there's a difference between a senior and a junior engineer. That difference is maybe the senior engineer says, "Hey, that sounds like a really complicated solution to what should be a simple problem."

The junior engineer doesn't see that problem. They don't see that it's complicated. Simplicity and elegance is something that takes time and experience to learn. We can't just say, "Be humble." That is the problem. You might say I have to be humble, and this is a complicated solution but I don't know what else to do. [laughs]

I don't know how to make this simpler. That's all I have to say about this. I really enjoyed it. Dijkstra is a very interesting character. I suggest you watch some of the recorded talks or lectures he's given, which you can find on YouTube.

A very interesting person, very opinionated, and also brilliant — I mean, obviously, he invented all this stuff that we take for granted today. I enjoyed this. I enjoyed thinking through it. I wish that he had a better answer than the one he gave, which is that the main problem is we think we're too smart. [laughs]

We think we're smarter than we are. If we recognized that we were not that smart, we would use easier techniques. That doesn't seem to be true. That might be something that you come around to as you get older and more mature and more experienced: "I was overcomplicating it in my youth. It's really simple."

Then you try to show a younger person that solution and they're like, "How did you get there? I don't know. I see that it works, but I could never do that." Thank you so much for listening. My name is Eric Normand. As always, rock on.