What is lazy evaluation?

Lazy evaluation is a common technique in functional programming for separating two concerns: how you generate a value from whether and when you actually generate it. We look at the two kinds of laziness and the benefits they give you.

Transcript

Eric Normand: What is lazy evaluation? By the end of this episode, you'll know the two kinds of laziness and how they can help you optimize your code. My name is Eric Normand and I help people thrive with functional programming.

Lazy evaluation is a common technique in functional languages. It doesn't have to be just for functional languages, but it often shows up there because functional languages have a nice property: referential transparency. That means you can apply the lazy evaluation optimization and it doesn't change the program's behavior, because there are basically no side effects to worry about.

If you don't have side effects, it's much easier to do lazy evaluation. There is a "Referential Transparency" episode and also a "Side Effects" episode. Go listen to those if you are unfamiliar with those terms.

What is lazy evaluation, first of all? Basically, it's don't evaluate what you never need to. If you never need a value, why are you running the expression?

Why are you running that code that generates the value if you're never going to need it? And once you do evaluate it, if you need it a second time, don't evaluate it again. Just save the answer and use it again.

Another way to put it is if you have some code, you either run it zero times, like you don't run it, or you run it one time if you do need it. It's an optimization. You don't run stuff you don't need, and then when you do run it, you only run it once.
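Here's a minimal sketch of that in Clojure (the language I'll use for the examples here), using delay. The println is just there to make the evaluation visible:

    ;; A value we might never need, wrapped in a delay. The body does
    ;; not run when the delay is created.
    (def answer (delay (println "computing...") (* 6 7)))

    ;; Zero uses: if we never deref it, "computing..." never prints.

    @answer  ; first use: prints "computing..." and returns 42
    @answer  ; second use: prints nothing, returns the cached 42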

It's also called call-by-need because you only call it if you need it. That's another term you might hear. It's sometimes called delayed evaluation because you're not running the code where it appears in the program, at that line of code. You're not executing the lines straight through.

If it's lazy, you're only going to execute that code when you need it. When your program proves that it needs that value, that's when it gets run, so it's delayed. The evaluation doesn't happen here; it happens sometime later, when you actually need it.

The opposite of this is strict or eager evaluation. Strict meaning it follows a very strict rule: just evaluate one thing at a time, right down in order. Eager is the opposite of lazy: evaluate everything, whether we need it or not.

You could do a double negative, I guess, that lazy evaluation is non-strict evaluation. It's a type of non-strict evaluation. Anyway, those are all the terms.

Why do you do this? Because it reduces the amount of code that is executed. Also, sometimes the amount of memory that is used.

You can imagine that you have some code where, at the top, you initialize a variable to some complicated expression, and then you have an if statement. In one branch you use the value, and in the other branch you don't.

In that other branch, you never needed to compute it in the first place, so why are you computing it? Well, you could say, "I should go through and move this variable into the branch where I'm going to use it." That would work in that one case, but it's manual labor. You have to do it yourself as the programmer.

What if you had a thing that didn't run that code unless you needed it, but you could put it wherever you wanted? As the programmer, you can make it clear and express it exactly how you think it reads well. Then the compiler or the language runtime figures out when to run it, so that if you don't need it, it never gets run.
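As a sketch of that in Clojure, suppose compute-stats is some expensive function (a hypothetical name, just for illustration, as is the :wants-stats? flag):

    ;; The expensive expression stays at the top, where it reads well,
    ;; but wrapping it in delay means it only runs if a branch forces it.
    (defn report [user]
      (let [stats (delay (compute-stats user))]  ; nothing runs yet
        (if (:wants-stats? user)
          (str "Stats: " @stats)    ; forced here; runs at most once
          "No stats requested")))   ; this branch never runs it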

That's what it is. You're separating how to generate the value — the code that you have to type in that generates that value — from when or whether you run it because you might not need to run that code. It's a separation of concerns here. You're separating two different things.

Besides not using the memory if you don't need the value, and not running the code if you don't need the value, laziness also lets you have infinite data structures. Infinite sequences, infinite trees, because you can say how to generate the tree or how to generate the sequence separately from whether you actually generate it.

Maybe you never need the branch of the tree that goes off to infinity, and because those concerns are separated, you never pay for it. The tree is virtually infinite: any branch you go down will never end. Obviously, you can't actually generate the whole thing, because you would run out of memory and it would take infinite time, but as long as you keep walking down the tree, you can keep finding new nodes.
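In Clojure, for example, an infinite sequence or tree is just a recipe, and only the parts you walk ever get built. A sketch:

    ;; An infinite sequence: only what you take is ever computed.
    (take 5 (iterate #(* 2 %) 1))
    ;;=> (1 2 4 8 16)

    ;; An "infinite" binary tree, built one node at a time on demand.
    (defn tree [path]
      {:path     path
       :children (lazy-seq (map #(tree (conj path %)) [:left :right]))})

    (-> (tree []) :children first :children second :path)
    ;;=> [:left :right]   ; only the nodes we walked were created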

It separates out those two things and lets you be a little bit more fluid with how you do it.

Now, you can also use laziness to generate really long sequences that take a lot of time to generate. You don't want to generate the whole thing up front. You just want to generate one element at a time, or one chunk at a time.

One thing I use lazy sequences for in Clojure is requesting pages from the database.

Let's say I have a table with 10 million rows. I want to get them 100 at a time, but I don't want to spread around the logic of how to request the next page, like adding SQL stuff such as LIMIT and OFFSET. I just want a sequence that has all of them in it, but as a lazy concatenation of chunks of 100.

I'm able to make a sequence that does fetch all of them eventually, but where I consume that sequence, I can just stop when I don't need any more. I don't actually make all the requests to the database, only the ones I need. You're separating things out. It's actually a pretty cool little technique.
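Here's a rough sketch of that technique. fetch-page is hypothetical; imagine it runs a query with LIMIT and OFFSET and returns a vector of rows:

    ;; Builds a lazy sequence of all rows, fetched 100 at a time.
    ;; fetch-page is a stand-in for your real database call.
    (defn all-rows
      ([] (all-rows 0))
      ([offset]
       (lazy-seq
         (let [page (fetch-page offset 100)]  ; e.g. LIMIT 100 OFFSET n
           (when (seq page)
             (concat page (all-rows (+ offset 100))))))))

    ;; Consuming 250 rows only issues three queries, not 100,000:
    (doall (take 250 (all-rows)))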

There are two types of lazy evaluation. One is explicit and one is implicit. Explicit means that you, the programmer, have control of which expression to delay, which expression is going to be lazy. Usually this is called delay. You call delay and you give it some code. Then that returns a delayed value.

You can call a method on it, or call some function on it, to tell it, "OK, I need the value now, so give it to me. Compute it and give it to me." Or, "If you've already computed it, just give me the cached version."
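In Clojure that looks like this. expensive-computation is a hypothetical stand-in for whatever work you want to defer:

    (def v (delay (expensive-computation)))  ; nothing runs yet

    (realized? v)  ;=> false, it hasn't been computed
    (force v)      ; computes now, caches, and returns the value
    @v             ; returns the cached value, no recomputation
    (realized? v)  ;=> true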

Implicit means that your language just does it for you without telling you or you don't have to say anything. For example, Haskell has this. It is pervasively lazy. All the code in Haskell is lazy, just about.

That means you never have to worry about delaying something, wondering, "Oh, this branch might not get run, so I shouldn't put it here." You never think about that. The whole language is lazy.

Clojure sequences are lazy. If you call map on a sequence, you will get a lazy sequence back. Nothing actually happens except creating the delay. [laughs]

Creating the lazy sequence is just one pointer saying, "OK, here's where to start." That gets created when you call map. Then, when you actually consume the result of the map, that's when the work happens and the sequence is generated. That's another example of implicit laziness.

The rest of the language is not lazy like Haskell is. Haskell is lazy, Clojure is not, but sequences in Clojure are.
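You can see that at the REPL. The println is just there to show when the work actually happens:

    ;; map returns immediately; no squaring has happened yet.
    (def results (map (fn [x] (println "squaring" x) (* x x))
                      (range 1000)))

    ;; Consuming the sequence is what triggers the work:
    (first results)
    ;; prints "squaring 0" through "squaring 31", since Clojure
    ;; realizes chunked seqs 32 elements at a time
    ;;=> 0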

All right, I'm going to recap and then we'll talk about what's happening in the next episode.

Laziness means you don't evaluate what you never need to. You can imagine that this saves a lot of computing, a lot of time, a lot of execution, because programmers in general, although they can look at a function and optimize it, they're not thinking about the whole.

They're not able to see the whole program at once and figure out, "Oh, in this branch, I never have to execute this code and that code." That's what the compiler can do.

Haskell, because it's pervasively lazy, is actually able to cut out huge branches from execution at runtime. Some of it can be done with static analysis, figuring out, "If I get here, I'm never going to need that, which means I don't need that, or that, or that..." It can do some of that statically, but it's mostly a runtime thing, too.

It's also known as call-by-need, delayed evaluation, non-strict evaluation. There's all sorts of terms for this. If you see those terms around, it's all the same. It all means the same thing.

It reduces how much code is executed and also how much memory is taken up because if you have a big data structure and you never have to generate it, you've saved all that memory.

It lets you make really long or infinite lazy sequences, or lazy sequences out of something like the lines of a file. You might not need to read the whole file, so you can stop consuming your lazy sequence, and the rest of the file never gets read.

Instead of reading this huge 10-gigabyte file, you make a lazy sequence of the lines, and then you can stop before you get to the end of the file. You never had to read in the whole thing.
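In Clojure that looks something like this ("huge.log" is just a stand-in path):

    (require '[clojure.java.io :as io])

    ;; line-seq gives a lazy sequence of the file's lines. Taking 10 and
    ;; realizing them inside with-open reads only the start of the file,
    ;; not all 10 gigabytes.
    (with-open [rdr (io/reader "huge.log")]
      (doall (take 10 (line-seq rdr))))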

There are two types, explicit and implicit. Explicit means you tell it, "Make this lazy." Implicit means it just happens for you, either in the whole language or in a certain part of the language; certain data structures might be lazy, etc.

Please subscribe if you like this because in the next episode — I've got it all lined up — we're going to learn how to implement lazy evaluation in your language. If this is intriguing, then you should subscribe.

Also, if you have any questions, any comments, please email me. I'm eric@lispcast.com. You can also tweet me. Questions are preferred via email, but if you want to get into a little light discussion, Twitter is great for that.

I'm @ericnormand. You can also find me on LinkedIn. We should connect up. I'm trying to find all the functional programmers out there. Awesome. See you later.