A Theory of Functional Programming 0003
**Eric Normand**: All right, this is my attempt at writing a book. If you haven't joined me before, my name is Eric Normand. I'm writing a book called "A Theory of Functional Programming." The industry needs a good definition of functional programming. No one has provided that yet, and so I'm trying to provide it.
You're listening to me develop this idea. I'm going to try to turn this, what I say, into a transcript and use that transcript to put into a book that I'm writing. Welcome and here we go. Last time, I talked a lot about data. Remember there are three domains. There's data, calculations, and actions.
The data tradition goes back to the early days of writing, and functional programming largely tries to learn lessons from those instead of doing what object-oriented programming tries to do, which is attach code to the data, so the data is inert. It just is what it is.
The other domain is calculations. If you come from a more academic background, or you're thinking more in terms of features, you might call these functions, or pure functions if you want to be very specific. They're the mathematical notion of functions, which is a relationship between the domain and the range.
Now, as functions they are timeless. Calculations are timeless. Meaning that it doesn't matter when you run them. You're going to get the same answer. No one else cares that you ran them. No one will know because it won't have any effect on the world. It also doesn't matter how many times you run them, you'll always get the same answer.
These are mathematical functions. If you were going to implement calculations, most languages would provide you with first-class functions, and that's what you would prefer. Now, some languages don't have those — Java, for example, and I don't know if C# does — but the object-oriented languages often don't have a notion of a function that's detached from a class or from an object.
You'd have to use whatever facilities they give you, because it needs to be first class. Remember that. You might have a singleton instance or something that represents that calculation and implements some protocol with an invoke method on that instance.
Why do we care so much about calculations? This fact that they are code, they run so they can do work, but they're timeless. That gives us a lot of flexibility. The fact that we can run them zero, one, or more times and always get the same answer is great. I'm just going to go down my list. One thing that you can do is align them with mathematics.
They're already mathematical notions of functions, but you could have a really complex function that no mathematician would ever write. You could just write a big function that's got multiple nested if statements — and we've all written functions like that — and it's very hard to analyze. Very hard as a mathematician to look at this and discover interesting properties about it.
What functional programmers tend to do is err on the side of smaller functions that do have known properties. These are your standard algebraic properties, stuff like associativity, commutativity, idempotence, whether it has a zero or an identity, all of these standard algebraic properties, many of which you learned in high school algebra.
These are very important for being able to compose your functions up in regular ways. We're going to talk way more about algebraic properties. Basically, this is functional programmers leaning a little bit on mathematics. Of course mathematics goes back thousands of years.
Not the study of the numbers, but the study of the algebras and these properties of the entities that mathematicians call functions. Another thing is that, once you've got something with known algebraic properties, you can do algebraic analysis. You can define equalities, identities.
You can say mapping a function over a list is the same as building a new list with the function called on each of its elements. Those two things now become interchangeable. If you wanted to, you can swap them out because they're equivalent. The compiler could generate the same code for them or, if it's going to generate different code, it could choose the code that's best.
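The map equivalence just described can be sketched in a few lines of Python. This is an illustrative example, not anything from a particular library: the two expressions below are interchangeable precisely because `double` is a calculation.

```python
# Mapping a function over a list is the same as building a new list
# with the function called on each element. The two expressions are
# equivalent, so either the programmer or the compiler may swap them.

def double(x):
    return x * 2

xs = [1, 2, 3]

mapped = list(map(double, xs))       # map the function over the list
rebuilt = [double(x) for x in xs]    # rebuild the list element by element

assert mapped == rebuilt == [2, 4, 6]
```

Because `double` has no effects, nothing observable distinguishes one form from the other; that is what makes the swap safe.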
As a programmer, you can do the same thing. You can say, "Right here I think it's clearer to use this version of this expression because it's equivalent to this other one and I can swap them out freely." The fact that you can do that at different levels is also good because that means that the programmer can choose the one that is the clearer expression.
The compiler can choose the one that will execute quickly, or with less memory, or whatever the constraint is. That really helps flexibility. Compilers are easier to write. Programmers can reason more freely with the code and write the code that makes the most sense to them. Another thing about calculations is that they are opaque.
In general, we're talking about Turing complete languages. You can basically do arbitrary stuff inside of your function. That means they're opaque. The only way to know what a function does, in the general case, is simply to invoke the function, to run it and see. You actually call it and you just wait for its answer. You don't know how long it's going to run. You have to wait.
That is one of the principal limitations of the function, is that you can't know what it's going to do until you run it. This is the general case. In many cases you can tell very quickly because, as an intelligent person, you can read the source code and reason about what the code is going to do.
There are going to be cases where even a person can't tell what it's going to do and whether it's going to terminate. Also, a lot of functions don't have source code because they are generated by composing other functions together or by partial application, for instance. There was never any code written for that particular value that is a function. That's a limitation.
We can talk more about the patterns and what we do with that later. What are the benefits of functions? I talked a lot about static analysis. One of them comes from a property that functions have, called referential transparency. What that really means is that you can replace a function invocation with the return value that it would produce.
In a function invocation, it's going to have certain arguments. In place, instead of calling the function, you can just put its value, because all it does is calculate from inputs to outputs. Did I define calculation before? I should do that. A calculation is a computation from inputs to outputs. Those inputs are the only things that vary, and the output is computed from them.
It defines a relationship between those inputs and outputs. It's referentially transparent. This is an important concept. It's a silly name, but what it means is that the return value is what you're looking for. When you call that function, all you care about is the return value.
If you can calculate that return value without calling the function, you can just replace it in there. That function does nothing except calculate this return value. In what cases would you do that? Let's say you have a compiler. You build into the compiler some special knowledge like, "Hey, all these mathematical operations, why don't you optimize those away?"
If you see two plus two, you just replace it with four. You don't have to put the code to calculate two plus two. Just put four because the compiler is smart enough to do that. If you have 10 times 10, just replace it with a hundred, you don't need to actually generate the code to calculate that at run time.
If you do enough of these optimizations, you could see this whole expression is now just replaced by this one value. It just recursively can take this bigger and bigger expression and turn it into smaller and smaller until it's one value.
Your compiler is taking advantage of this property of addition, multiplication, subtraction, and division: they are referentially transparent. Because functions are timeless — plus and times are functions — it doesn't matter how many times one runs. It could run zero times and that's fine. What you're really after is the answer. The return value.
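The recursive collapsing a compiler does here can be sketched as a toy constant folder. This is a hand-rolled illustration, assuming expressions are represented as nested tuples like `("+", 2, 2)`; real compilers work on richer intermediate representations, but the principle is the same.

```python
# Toy constant folding: because +, *, -, / are referentially
# transparent, a call can be replaced with its return value. Applied
# recursively, a big expression collapses into a single number.

import operator

OPS = {"+": operator.add, "*": operator.mul,
       "-": operator.sub, "/": operator.truediv}

def fold(expr):
    """Recursively replace each constant subexpression with its value."""
    if not isinstance(expr, tuple):
        return expr                          # already a plain value
    op, left, right = expr
    return OPS[op](fold(left), fold(right))  # fold children, then apply

# (10 * 10) + (2 + 2) collapses to a single value: 104
assert fold(("+", ("*", 10, 10), ("+", 2, 2))) == 104
```

No code for `2 + 2` needs to survive to run time; the answer is all that matters.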
The other thing is, you can run it when you want. You can run it later. You can defer calculating the answer because maybe you won't even need it. You could say, "I'm going to invoke this function, but don't do it until I ask for the answer, until I look. Then, once I look, then you can calculate it for me."
This is actually called lazy evaluation and we'll get to that shortly, it's very close on the list. Another benefit of this run as many times as you want and whenever you want is that you can do testing. You can run this function and see if it does what you expect it to as many times as you want, and in any environment that you want, whenever you want.
This is very important for doing regression tests. Before you deploy to production, let's run all of our tests in this other server that's not going to hurt anything. You can run those calculations as many times as you want, on however many servers you want, with as many different test cases as you want, and it's not going to change anything.
You could have a bug and it'll notify you obviously, but it's not going to launch a million missiles. That's the nature of calculations, and you can do that because they are timeless. It doesn't matter how many times you run them or when you run them, they're the same all the time.
You can run them on your local machine, while you're developing, test them out. You can run them on your build server to test them out. Then you run them in production with actual data. Those are the benefits of using calculations.
The patterns that we use calculations for, one very common pattern is the data transformation pipeline. You can imagine you have some data. Let's say it's in the wrong shape. You got data from somewhere, and you want to put data over here, send it to some service. You need to transform it. That transformation is from inputs to outputs.
You could write one function that just takes that input, does some work on it, and then generates an output value that is what is needed for the next step. But because functions compose easily — that's another benefit, which we haven't talked about yet — you can actually make it a pipeline, meaning, let's say, 10 different functions that each make a small change to the input.
They return a new thing which gets sent to the next stage of the pipeline, which gets sent to the next stage, until you have done all the transformations necessary to send it on to the final service. You can imagine having an action which, say, reads from a database — it's an action because it depends on when you read it — that generates some data. This data is timeless.
Then that data goes through this pipeline of 10 functions, data comes out the other end, and it gets sent to a service. This whole pipeline is a single...It's a composed-up calculation. We've composed 10 calculations into one big calculation.
This is a very common pattern of action to calculation, calculation, calculation down to other action. The action does as little as possible, just reads one thing or reads a collection of data from the database. Then the action on the other end just sends whatever you give it to the service. Then all the work is done in this big calculation. It's a data transformation pipeline.
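A minimal sketch of this action-calculation-action pattern might look like the following. The database read and the service call are hypothetical stand-ins (here just shown as plain data in and data out); the point is that everything between them is a pure pipeline of small calculations.

```python
# Data transformation pipeline: each stage is a small pure calculation
# that takes data and returns new data for the next stage.

def parse_names(rows):        # calculation: pull out one field
    return [row["name"] for row in rows]

def strip_whitespace(names):  # calculation: clean each value
    return [name.strip() for name in names]

def drop_empty(names):        # calculation: filter out blanks
    return [name for name in names if name]

def transform(rows):
    """The composed pipeline is itself one big calculation."""
    return drop_empty(strip_whitespace(parse_names(rows)))

# Imagine `rows` came from a database read (an action) and the result
# gets sent to a service (another action). The work happens in between.
rows = [{"name": "  Alice "}, {"name": ""}, {"name": "Bob"}]
assert transform(rows) == ["Alice", "Bob"]
```

Because `transform` is a calculation, it can be tested exhaustively without a database or a service anywhere in sight.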
That pipeline can be tested, made super robust, and can be made out of smaller parts that are composed up. Let's talk about that benefit of composing functions. We already talked about, last time, how you can take two functions and compose them. There's different ways to compose them. One way is we're taking two calculations, we compose them in what is, in mathematics...
This is going to get confusing. In mathematics it's called function composition. Function composition means I take a function that takes an argument and returns a value. I'm going to make a new function that, when I pass it an argument, passes the argument to that first function. Then that return value gets passed to a second function as the argument.
Then the return value of that becomes the return value of the bigger function. It's very easy to write this in code. What this lets you do is the compose function takes two functions and returns one function that will pipeline through the two functions. This is one way to compose calculations, is by making this pipeline, a chain of functions that data goes in one end.
Then, when it leaves that calculation, it goes to the next calculation as an argument. Then the return value of that gets passed to the next one, and the next one, until the final return value becomes the return value of the whole chain. You can package that up in a new function that [makes whoosh sound] will just go through all of them.
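It really is very easy to write this in code. A sketch of the two-function case described above:

```python
# compose takes two functions and returns one function that pipelines
# an argument through both: g runs first, then f on g's result.

def compose(f, g):
    return lambda x: f(g(x))

def increment(x):
    return x + 1

def double(x):
    return x * 2

inc_then_double = compose(double, increment)
assert inc_then_double(3) == 8   # (3 + 1) * 2
```

Chaining more calls to `compose` builds the longer pipelines described above, one link at a time.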
There are a number of well-known ways to combine functions in this way. Because they don't have effects, calculations are timeless. You can also — this is another benefit — use them a lot in higher-order functions and be confident.
You can be confident that whatever you're passing it to, it doesn't matter if it calls it, or how it calls it, or when it calls it, or how many times it calls it. You don't care. That gives things a little more safety, and it gives the implementer of the higher-order function more flexibility in how it can work its magic.
An example of that is map. Map is a function that takes a function and a list. You don't care how Map is implemented. You don't care how many times the function that you pass it gets called or if it gets called at all. Maybe a better example is something like sort by. Sort by takes a function which generates the sorting key.
You could sort a list of lists by the third element of the list. Your function would be third. It will be called on each of the lists in your list, and that generates a key. Now, does it generate those keys one time and then sort it as an optimization, or does it willy-nilly call that key function many times just out of lazy implementation? It doesn't matter. You don't have to worry about it.
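The sort-by example might look like this in Python, where `sorted` takes the key function as an argument. Whether the implementation calls the key once per element or many times is the implementation's business, because the key is a calculation.

```python
# Sorting a list of lists by the third element. The key function is
# pure, so it doesn't matter how many times sorted() chooses to call it.

def third(xs):
    return xs[2]

lists = [[1, 2, 9], [4, 5, 1], [7, 8, 5]]
assert sorted(lists, key=third) == [[4, 5, 1], [7, 8, 5], [1, 2, 9]]
```

As it happens, Python computes each key once and sorts on the cached keys, exactly the optimization described above, and callers never need to know.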
That function is pure. It's a calculation. It's timeless. It can be called any number of times. This is the mindset that functional programmers get into. "It's a pure function. I'm not going to concern myself with how many times it gets called." Now, if stuff starts getting slow, we're going to talk about that in a little bit.
In general, this is the line we draw. "Eh, I don't care. It can call it zero times. I don't care." All right, so that's higher-order operations. The pattern is that higher-order operations are calculations that take other calculations and use those calculations to make decisions, do work, and stuff like that. You can also have higher-order operations that return calculations.
You can have a calculation that returns another calculation. Maybe it's building up a calculation from smaller calculations, like we talked about before. We talked about laziness just a second ago. Because we're dealing with calculations, they're timeless. Another benefit we get from this is that we don't care if it even gets called because what if we never use the value?
If you have an if statement, maybe the value gets used in the then branch, but not in the else. You can do this manually. You can push calculations around. If you're writing some code and you're not in a lazy language, you can push the calculation into the if statement so that it only happens when it's needed.
In a lazy language, you can put it all up at the top and just say, "Well, I'll define this. This name means this calculation." Then you don't even have to worry, because the laziness of the language will take care of it. That's just a convenience, but it's something that calculations let you do.
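In a strict language you can get the same deferral by hand, wrapping the calculation in a zero-argument function (a thunk) and only calling it on the branch that needs the value. A sketch, with a made-up expensive calculation standing in for real work:

```python
# Manual laziness in a strict language: name the calculation up top,
# but only run it in the branch where its value is actually used.

def expensive_calculation():
    return sum(x * x for x in range(1_000_000))

def respond(need_detail):
    total = expensive_calculation      # named here, but not yet run
    if need_detail:
        return f"total is {total()}"   # only computed on this branch
    return "no detail requested"       # never computed on this branch

assert respond(False) == "no detail requested"
```

This only works safely because the deferred thing is a calculation: running it later, or never, changes nothing else in the program.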
Now, there is a very important benefit of laziness that is more than just a convenience. It is actually a useful way to decouple things. You can have a function generate a list, and a very large list it could be. The consumer of that list can decide how many items it needs to read. You've decoupled the production of the list from the consumption of the list.
The producer, because it's producing a lazy list, can just produce the whole list. It is simply saying, "Here's the whole list," but the language or the run time is making it lazy. It's not actually calculating anything. Then the consumer can decide, "Well, I only really need 10, so I'm just going to cut this list after 10.
"Those are the ones that I'm going to realize, to make real, to actually calculate." You've separated out the decision of how to generate the list from how big of a list you're going to generate. With laziness, things get so decoupled that even little sub-pieces of the list that aren't being used don't get generated.
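This producer/consumer decoupling can be sketched with Python generators standing in for lazy lists. The producer describes the whole list, even an infinite one; the consumer decides how much of it ever becomes real.

```python
# Lazy list: the producer describes "the whole list" of squares, but
# nothing is calculated until a consumer asks for elements.

from itertools import count, islice

def squares():
    """Producer: every square number, lazily, with no upper bound."""
    return (n * n for n in count(1))

# Consumer: "I only really need 10," so only 10 are ever calculated.
first_ten = list(islice(squares(), 10))
assert first_ten == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```

The producer never had to know the consumer wanted ten; that decision lives entirely on the consuming side.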
This is actually pretty neat. It saves a lot of calculation. That's a benefit if you've got laziness in your language, which some languages do. Some testing patterns. Unit tests are very easy to write with calculations. Basically, you just give it example arguments, and you check the return value.
You can check that the return value is equal to a specific value, or that it has some other properties, like the number of elements in the list, or whatever properties you want to test. When you get into property testing, you can actually generate random inputs and check the return value for certain properties.
The reason you can generate random inputs is because you understand that this calculation has certain valid arguments and that the return values are going to have certain properties that you are interested in. If you align your calculations with mathematical functions with known mathematical properties, you can get a lot of your testing done for free.
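A hand-rolled sketch of this kind of property test, using a reverse function as the calculation under test. Libraries like Hypothesis automate the input generation, but the idea fits in a few lines:

```python
# Property testing by hand: generate random inputs and check properties
# of the return value instead of one specific expected output.

import random

def my_reverse(xs):
    return xs[::-1]

for _ in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    # Property: reversing twice gives back the original list.
    assert my_reverse(my_reverse(xs)) == xs
    # Property: reversal preserves length.
    assert len(my_reverse(xs)) == len(xs)
```

A thousand runs like this is only safe and meaningful because `my_reverse` is a calculation: same inputs, same outputs, no side effects to pollute the next run.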
Not free (I'm not going to say it's free), but it is very cheap. Because it's timeless and you can run the function as many times as you want, you can generate millions of test cases and cover a much larger set of inputs than you normally could. I'm right at 29 minutes right now. I usually go 30, so I'm going to end here. Next time we're going to talk about actions, the final domain.
Awesome. Subscribe. Like. Do what you have to. Mash some buttons on your interface. Make people know how awesome this is. All right, see you soon. Bye.