What is tail recursion?

Tail recursion is a kind of recursion that won't blow the stack, so it's just about as efficient as a while loop. Unfortunately, not all platforms support tail call removal, which is necessary for making tail recursion efficient. We talk about what it is and how to do it, even if your language doesn't support it.

Transcript

Eric Normand: What is tail recursion? By the end of this episode, you'll understand what the term means and how it can save you from stack overflow. Hello, my name is Eric Normand. I help people thrive with functional programming.

This is an important term. If you're in discussions about functional programming, you'll hear it. I feel like it's important to know what it is. It is kind of a low-level idea. It's very much an implementation detail of whatever language or platform you're running on. It can have surprising performance implications, so it is important to understand.

The idea is that recursion has a cost. This applies to normal recursion. Let's just talk about regular function calls first. Function A calls function B, and function B calls function C. Then when they return, function C has to know where to return to.

There's going to be a jump in the instructions to go run the code for C. When C returns, it has to return to the point in B where C was called, and continue from there. Then when B returns, it has to know where in A to continue.

The way that's stored is on a stack. That makes sense because you add stuff to the stack, and then you pop it off. As you call deeper and deeper into the call graph — A calling B, then deeper into C, and then C calls D — each time, you push an address of where to return to. This is called a stack frame because you're adding a frame to the stack.

Also in the stack frame are all of the arguments that you need. Local variables are stored there too. This is good. It's a very nice data structure for keeping track of all these little variables you need for each call of a function, and then where to return to.

The trouble is that when you start using recursion, you start to have a lot of calls. It's not just A, B, C — a handful of calls, maybe up to 100 deep, but still a small number. Once you get into recursion, where a function is calling itself, that self-calling could go thousands of levels deep.

Each time, you're taking up a little bit of space on the stack. If you run out of space on the stack, that's called a stack overflow. It's very easy to do. This is one of the disadvantages of recursion when it's just done naively like that.
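To make that concrete, here's a minimal JavaScript sketch (the function name sumTo is mine, just for illustration): a naive recursive sum where every call pushes a new frame, so a deep enough recursion overflows the stack.

```javascript
// A naive recursive sum of the integers 1..n.
// Every call pushes a new stack frame that has to wait around
// for the recursive call to finish before it can add n.
function sumTo(n) {
  if (n === 0) return 0;
  return n + sumTo(n - 1); // NOT a tail call: the addition happens after the call returns
}

console.log(sumTo(10)); // 55
// sumTo(1000000) throws "RangeError: Maximum call stack size exceeded" in Node
```

The work "add n to the result" is pending in every frame, which is exactly why the frames can't be thrown away.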

What a lot of languages do — in the runtime, or sometimes in the compiler — is what's called tail call removal. Imagine this. Function A calls B, and then B calls C, but it's the last thing B is going to do. B does a bunch of steps, and then the last thing it does is say, "Return C open, close parens." It's calling C as the last thing it does before returning.

That means that you don't need the local variables for B anymore. You don't need the arguments. You don't even need to return to B only to return to A right away. You could just return right to A. Just skip it. Just skip that intermediate jump. A is going to get the value that B returns. B returns the value that C returns. Just skip B. Go right back to A.
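Here's a tiny JavaScript sketch of the difference (the names helper, notTail, and inTail are mine):

```javascript
function helper(n) { return n * 2; }

// Non-tail call: after helper returns, we still have to add 1,
// so this function's frame has to stay on the stack.
function notTail(n) { return 1 + helper(n); }

// Tail call: calling helper is the very last thing this function does,
// so its frame could be reused, and helper could return straight to the caller.
function inTail(n) { return helper(n); }

console.log(notTail(3)); // 7
console.log(inTail(3));  // 6
```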

If you can detect this — if it's got the return right on the call to C — then you don't need to make a new stack frame. You can reuse B's stack frame. You're saving a stack frame for that call. It's not so important when you're just doing normal non-recursive calls.

Once you get into recursive calls, all of a sudden you're doing thousands, even millions, of self-calls. C calls itself. It calls itself. It calls itself until finally it stops. Each time, it's adding a stack frame.

If you have it in a tail call position where it's the last thing it does, then you can reuse that stack frame. You're not adding stack frames all the time. That's how you avoid stack overflow.

Tail recursion is just recursion in the tail call position. If you have tail call removal in your language, then boom — it's basically an optimization. It makes recursion a lot more practical in your language.
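As a sketch of what tail recursion looks like, here's the sum rewritten with an accumulator in JavaScript (the names sumToTail and acc are mine). Note that in an engine without tail call removal, this still grows the stack — the shape of the code only enables the optimization, it doesn't guarantee it:

```javascript
// Tail-recursive sum: the recursive call is the very last thing the
// function does, so an engine with tail call removal can reuse the frame
// and run this in constant stack space. The running total travels in the
// accumulator argument instead of as pending work on the stack.
function sumToTail(n, acc = 0) {
  if (n === 0) return acc;
  return sumToTail(n - 1, acc + n); // tail position: nothing left to do afterward
}

console.log(sumToTail(10)); // 55
```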

That's the thing: a lot of languages these days do not have tail call removal. For instance, JavaScript — I believe there's a proposal to extend the language with tail call removal, but it doesn't have it.

Why wouldn't you want to do tail call removal? The big argument against tail call removal is that you lose stack information. When you throw an error, you normally keep track of all the stack frames so that when you get the stack trace for that error, you see everywhere that was called in the chain.

If you're removing those from the stack — those frames are getting reused — you lose that information. Your stack traces become less useful and maybe even misleading.

There are several proposals about what to do here. One of them is that you only remove tail calls, let's say, after three recursive calls of the same function. If A calls B, and B calls C in tail position, you don't remove that frame.

If C calls itself three times, then on the third time, you start reusing that last frame. Then you can still show a useful stack trace — all the functions that were only called once are still there. You also get the benefit of the recursion not taking up unbounded space.

Java also does not have tail call removal. (It's also called tail call optimization or tail call elimination.) It doesn't have it for what I've heard are silly, historical reasons. At some point, the security system of Java started counting stack frames.

If tail calls were removed, those counts would be wrong. The security system counts stack frames and says, for example, you can't run this code unless you've got at least two frames above you — which is a bad way to do it, because it relies on deep implementation details.

If they change the code that counts stack frames, then they should be able to do it. They're working on that, but it's deep in the guts of the JVM implementation. It's not something we can just expect will happen in the next release.

Surprisingly, in C, the GCC compiler will do tail call removal if you turn optimizations on, which is pretty cool. That means recursion in C is a lot more useful than in Java. There are also ways to mitigate the lack of tail call removal in your language or on your platform.

The first way is manually. If you know you're going to be doing tail recursion, you can replace it with a for loop or a while loop. Let's say a while loop. Instead of calling yourself, you reset all the arguments and local variables to the new values they would have after the recursive call, and then let the loop go back around. You just do it as a while true. You're just looping.

Instead of calling yourself, you just let it loop back around. Then, in the case where you want to stop looping, you use a return. You just return right out from inside the loop. That way, it's very much like tail call elimination but maybe harder to read. Of course, it doesn't use any stack frames when you're just doing a loop. It'll be faster. That's a benefit.
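Here's that manual transformation applied to the tail-recursive sum, as a JavaScript sketch (the name sumToLoop is mine):

```javascript
// The tail-recursive sum rewritten by hand as a loop: instead of
// calling ourselves, we reassign the "arguments" and loop back around.
function sumToLoop(n) {
  let acc = 0;
  while (true) {
    if (n === 0) return acc; // the base case becomes a return out of the loop
    acc = acc + n;           // the new argument values...
    n = n - 1;               // ...become plain reassignments
  }
}

console.log(sumToLoop(1000000)); // 500000500000, with no stack growth
```

Note this handles an input that would overflow the naive recursive version, because no frames are pushed at all.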

The second way is something your language might provide. Clojure has a thing called explicit tail calls. It's a form, an expression, called recur, which only works in tail call position.

Clojure will check that you are doing it in tail call position. It looks like a recursive call, but under the hood it's doing that while-loop trick I just talked about. It only works for self-calls — recur always calls the function it's in again. So it gives you half, maybe 75 percent, of the benefits of tail call removal.

Also, being explicit is good. It's got another benefit. Whereas, a lot of times, when you do recursion, you're not thinking about whether it's a tail call or not. This makes you think about it. You can't get it wrong because it checks if it's in tail position. It will warn you or throw an error if it's not. It's a decent thing.

I still would prefer having full tail call elimination on the JVM. Rich Hickey considers it a platform feature and not a language feature. It's something that he's just going to let the JVM figure out on their time.

Let me recap real fast. Recursion has a real cost. Every time a function calls another function, a stack frame gets pushed. If you're recursing a lot, you're going to push a lot of stack frames. The stack is not infinite. When you run out of space, that's called a stack overflow.

If you have tail call removal, then the recursive calls that are tail recursive will have their frames removed, and you won't have the stack overflow. It doesn't eliminate the problem entirely, though — if a call is not in tail position, you can still overflow.

It's more space efficient. It's just like a while loop. In fact, you can mechanically translate tail recursion into a while loop. It's not difficult. The only caveat is that the while loop might not look like the one you would have written by hand.

All right. If you'd like to listen to more podcasts, I suggest you subscribe. You can go to lispcast.com/podcast to see all the past episodes. There are audio, video and text depending on how you like to consume this stuff. There it is.

You'll also find links to subscribe and to find me on social media. If you want to get into a long-form discussion with code and stuff, email is probably best. If you just have a question or a disagreement or whatever, Twitter might be better. That's all I have for this episode. Thank you very much, and rock on.