What is referential transparency?
This is an episode of Thoughts on Functional Programming, a podcast by Eric Normand.
Referential transparency is a term you'll hear a lot in functional programming. It means that an expression can be replaced by its result. That is, 5+4 can be replaced by 9, without changing the behavior of the program. You can extend the definition to functions, too: a function is referentially transparent if, when you call it with the same values, it will give you the same answer.
Eric Normand: What is referential transparency? By the end of this episode, you'll know what this term means and why it's important in functional programming. My name is Eric Normand and I help people thrive with functional programming.
This concept is important because it's used a lot in functional programming circles. Because of that, it's important to understand what people mean by it. The term could come up in conversation, even in documentation, or a comment somewhere.
I do want to emphasize that this term is one of those terms that was taken from previous work, before functional programming, and kind of changed. This is not the original definition. It's not the definition you'll find in linguistics or in the philosophy of language, which is where the term originally comes from.
This is what functional programmers mean right now, so what is it? Referential transparency means you can take an expression in your program, and replace it with the result of that expression.
As an example, if you have the expression 5+4, you can replace that with 9. That's what it means. Now 5+4 is easy because you can do that in your head without running the program. There are some expressions that you don't know yet what they're going to give you.
For instance X+Y, you don't know what that's going to give you, because you don't know X yet, and you don't know Y yet. What you do know is that if X and Y are the same as last time, X hasn't changed and Y hasn't changed, it's going to give you the same value as before.
If X is 3 and Y is 7, it's always going to give you 10. This is referential transparency. It means you can replace the expression with the result of that expression.
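The episode has no code, but that substitution idea can be sketched in a few lines of Python (the names `add`, `result_a`, and `result_b` here are just illustrative):

```python
def add(x, y):
    # Pure: the result depends only on the arguments, no side effects.
    return x + y

# These two lines behave identically, because add(5, 4) is
# referentially transparent and can be swapped for its result, 9:
result_a = add(5, 4) * 2
result_b = 9 * 2

print(result_a == result_b)  # True
```

Nothing about the program changes when you make that swap, and that's exactly the property the term is naming.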
Another way to look at it — this is the way I like to look at it — is it means it doesn't matter how many times you run that expression. It doesn't matter how many times you call that function. Sometimes that expression is a function call. Sometimes it's like an operator. Sometimes it's some other kind of expression in your language.
It doesn't matter how many times you run it. You're always going to get the same answer. When I say it doesn't matter how many times, it means even zero times is OK. For example, in the 5+4 example that I gave before, the compiler can replace that 5+4 with 9. So, basically, at run time, your expression never runs. It ran once at that compile time just to give you that number.
Or it might never need to be run, because that line of code never executes, because that value wasn't needed. We'll get to that in lazy evaluation. That's what that's called. Another way to look at it is: if you give the same arguments to the operator or to the function, you're going to get the same result every time. It doesn't matter how many times you call it.
You can call it zero times, one time, a hundred times. You're always going to get the same results. Now, notice that you can't do that if you have side effects in your function. It has to be a pure function to be referentially transparent. If my function sends out an email or moves a robot arm or deletes a file on the disk, it's not going to be referentially transparent, because it does matter how many times it runs.
I can't just replace that expression with its return value. Say it moves a robot arm and then returns how many inches it moved, so it's 10 inches. Then I run it again, and it's like, "No, I didn't run it again. I replaced that with 10 inches." It's not going to work. You need the arm to actually move every time you run the function.
Likewise, if you're doing something like fetching a Web page, reading a file from the disk, calculating a random number, giving the current date — and those are functions — those are not referentially transparent either. Every time you read the file from the disk, it could be different. Some other program might be writing to it. Same with a Web request.
You could get a different page each time. The date is obviously going to be different every time you run it, because you might run it today, you might run it tomorrow. The random number, you want it to be different every time. It's not referentially transparent.
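Two of those examples, the random number and the current date, are easy to sketch in Python (the function names `roll_die` and `current_date` are just illustrative):

```python
import random
import datetime

def roll_die():
    # Same call, potentially different answers: not referentially transparent.
    return random.randint(1, 6)

def current_date():
    # The answer depends on *when* you run it, not on any argument.
    return datetime.date.today()

# If a compiler froze roll_die() at whatever it returned the first time,
# every roll would come up the same number forever -- a different program.
```

You can't substitute either call with a recorded result without changing the behavior, which is the whole point.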
You can't replace the call to the random number generator with whatever the compiler found the first time it called it. It's not going to work. You actually need to run it. You can't just replace the expression with its result. This is important because it's one of the properties of pure functions, or calculations.
I like to call them calculations, but you might know them better as pure functions. This is one of the properties of pure functions that we can use in our programs or our programming language to get some benefits from. I've already mentioned you can replace 5+4 with 9 at compile-time and you get the same resulting program.
You don't have to run that at run-time because it's always going to give you the same answer. That's an optimization that the compiler can do. Likewise, you can do optimizations like lazy evaluation. What lazy evaluation means, is it says, this expression will be called either zero times or one time.
Not more than one. Zero times, it's fine, because we said it doesn't matter if it's called zero times, because you might never need the answer. You might not need the answer to X plus Y, so why run it? Just remember, if I ask for it, then you calculate it.
If you do need the answer, it calculates it one time, and if you need the answer again, it just uses the pre-calculated answer. That's lazy evaluation. I should have an episode on that. That sounds like a good thing. Let me make a note.
The other thing that you can do is more like caching or memoizing. Lazy evaluation is saying, "We're going to run this zero or one times."
Caching and memoization is very similar. What you're saying is, "After I calculate it once, I'm going to store it and return that the next time."
The famous example of memoization is memoizing the recursive version of Fibonacci. If you do a recursive definition of Fibonacci, there's a lot of repeated work. You're going to be calling Fibonacci of two or Fibonacci of three a lot of times.
Because every time you call Fibonacci, it's going to call two more Fibonaccis and add them up. Each one of those is going to call two more Fibonaccis and add them up. Each one of those will call two more so — it doubles every level of that tree.
A lot of them are going to be the same because they're going to start running into each other. You should program it just to see what I'm talking about. If you do Fibonacci of a high number, it takes a long time to calculate and it's because it's calculating the same thing over and over.
You can memoize this function — Fibonacci — so that every time you call it, it's actually checking, "Hey, I've already calculated this one. If I have, I'm not going to calculate it again. I'm just going to return it." It kind of short-circuits the recursion. You only have to do that the first time. After that, it'll just use this cached value. It's very similar to lazy evaluation.
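A minimal Python sketch of that memoized Fibonacci, using the standard library's `functools.lru_cache` as the cache (safe only because `fib` is a pure function):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # remember every answer we've computed
def fib(n):
    # Naive recursive definition: each call spawns two more calls.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Without the cache this repeats the same subproblems exponentially;
# with it, each fib(k) is computed exactly once.
print(fib(35))  # 9227465, returns almost instantly
```

Try deleting the `@lru_cache` line and calling `fib(35)` again to see the slowdown the episode is describing.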
Lazy evaluation is like a generalized thing you can run on any expression. Sometimes the whole language will have every expression be lazy evaluated behind the scenes. You don't even think about it, like in Haskell. Whereas memoizing is something you do specifically to a function as a way to optimize it.
Like I said, this is a property of pure functions. I want to talk about why I don't like this term just a little bit.
I already [laughs] mentioned that it is taking a term that already exists and repurposing it, and changing the definition a little bit without reference to its sources, really. That's one thing I don't like because then you can't really communicate across domains. They use it a different way in linguistics and even in computer science.
There's this whole other branch, not functional programming, called programming language semantics, that uses it in a totally different way. We have to be careful about that. The other thing I don't like about it is that it pinpoints this one little thing about pure functions. That is part of the definition of pure functions.
A pure function has no side effects. The only thing that's important about the function is its return value. By definition, that means you could replace it with its return value. It's really not any different from pure functions, not in any practical, meaningful way.
It's a term that, when people throw it around, sometimes it sounds like, "Don't you just mean that's a pure function?" "Do you really have to say it's referentially transparent?" "Are you just trying to sound smarter?" That's my nitpicking with the term. It's important though because even though people are just using it to sound smarter, they are using it, and you have to understand what they're talking about.
Referential transparency also has an opposite: referential opacity. Referentially opaque means you cannot replace the expression with its value. You actually have to run the thing.
If you do something like a GET request to a Web server, you can't just replace the GET request with whatever the compiler got when you compiled it, because the Web server on the other side could change that page at any point. You actually have to call it each time.
Same thing for writing out to a disk. You can't just skip writing to the disk because you said, "Oh, my compiler has optimized that away." No, you have to write it out each time. That makes it opaque. Let me recap. Referential transparency is a property of an expression.
We can extend it to function calls, or to functions: if you give it the same arguments, you will always get the same answer. That means you can replace that whole function call with the answer. That is, if you know the arguments. Even at runtime, you can do that, which is lazy evaluation.
It means that it doesn't matter how many times you've run it. You don't even have to run it at all. You can just replace 5+4 with 9, never run the plus, and the program is still correct. It still gives you the same behavior.
It's what enables lazy evaluation, compiler optimization, caching, and memoizing. Here's the thing I didn't mention: it's useful when you're dealing with the algebraic properties of an expression. You don't have to worry about how an expression got calculated. You can always just replace it with the value. What's important is the value that came out of it.
That's a nice thing when you're reasoning about your program. You don't have to worry about how the thing got calculated. You can treat this function or this function call, this expression, as a black box. In the same way that you could just say, "Well, it's going to be replaced by its value."
The opposite of it is referential opacity. People don't use that as much as they use referential transparency. I've said my piece on referential transparency, but what I haven't really given is that original definition. I'm talking about the original computer science definition. I'm not saying go all the way back to Quine, who was doing this in the philosophy of language.
I'm talking about how even in computer science, it was used before. I didn't go over that. If you want me to, I can, in another episode. Let me know and I'll make an episode about that. If you've liked this episode, you should subscribe because there will be more like it. I'd love to see more people on this channel.
The reason I do this whole show is to help spread these ideas to bring some clarity to them as much as I can. A lot of people will talk about just the term and define and maybe use it a couple times. I like to talk about how the term is used and how people are using it, whether it's important, things like that.
Just trying to go a little bit deeper than the simple definition, because these things are important. We shouldn't forget the history of a term, so that if you're reading a paper on it, you know why it seems a little different from the way people are using it. The terms have history. It's really the usage of them that matters.
Anyway, if you're into that, please subscribe. If you have a question, if you want to get in touch with me, if you want to tell me that you want this original definition, from semantics, programming language semantics, you can email me at email@example.com. You can also tweet at me, I'm @ericnormand, or you can find me on LinkedIn. Cool, I'll see you next time.