Legacy in Functional Programming

I was honored to talk about functional programming and legacy code on the Legacy Code Rocks podcast.

Transcript

Welcome to Legacy Code Rocks, the podcast that explores the world of modernizing existing software applications. I'm your host, Scott Ford. This show is out to change the way you think about legacy code. If you're like a lot of people, when you hear the phrase "legacy code," it conjures up images of big mainframes or keypunch card machines. While that's true, it only tells a small part of the story. Anything that someone else has left behind is their legacy. This episode is sponsored by Corgibytes. Corgibytes helps companies make their existing custom software systems more stable, scalable, and secure. Corgibytes specializes in upgrades, bug fixes, performance enhancements, and other maintenance activities designed to help tech companies generate revenue, lower operating costs, and reduce risk. Today on the podcast, we have Eric Normand. Eric is an experienced functional programmer, trainer, speaker, writer, and consultant on all things functional programming. He started writing Lisp in 2000, and is now a Clojure expert, producing the most comprehensive suite of Clojure training material at purelyfunctional.tv. He has a popular Clojure newsletter and blog. He also consults with companies to use functional programming to better serve business objectives. You can find him speaking internationally at programming conferences. Eric, welcome to the show. Hey, thanks for having me. It's great to be here. Yeah, thanks so much. Thanks so much. I think you might be the first person we've had on the show who had such a deep focus in the functional programming ecosystem. Oh, cool. So I'm kind of curious, what are some common problems that you tend to see in legacy code bases that are utilizing functional programming or a functional programming language? Wow, that's a question I don't think I'm prepared to answer. I haven't seen many legacy code bases that use functional programming. I'm sure they're out there. There have to be. I mean, Lisp has been around since the 1950s, right? 
Yeah, exactly. Exactly. I mean, the first thing that comes to mind: Common Lisp was actually written in the 80s as kind of an amalgam of the different existing Lisps at the time. And one of the ways they made sure that it stayed relevant and practical was they wanted it to be backwards compatible, so that existing legacy systems could run on it. And so there's this really awesome package called Maxima that was a mathematics package similar to Mathematica or MATLAB. It could just do everything math related. Common Lisp was designed so that they wouldn't lose the value that had been put into Maxima. And one thing that you see with these big old systems in Lisp is that, because they were written so long ago and efficiency was way more of a concern, how much memory it used and how much processing it did, there was a lot more mutation, a lot more mutability in the state, a lot of mutable variables, a lot of global variables, as most systems used back then. Yeah, I would say that's my only real, well, let me put it this way. There's also Emacs. Emacs is legacy. Yeah. Yeah. Emacs is legacy. And then I'm thinking, like, if we just take the pejorative use of the word and set it aside. Think of it as a system that was written in a functional programming language that you didn't write. You've inherited it from somebody else. Sure. Maybe think of it from that perspective, in terms of common challenges that you see. Programs can tend to be very abstract. So if you haven't written it, it's often hard when you're dealing with something that's as abstract as, like, data transformations. Someone has written a system to take data in a certain shape, let's say it's some CSV system somewhere, and turn it into data in another shape, another format that it has to talk to, you know, and it has to translate between them. And over time, that pipeline of transformations can get really long, and it's very hard to understand what each piece of it is doing. 
If you haven't grown up with it, just jumping into it yourself, it's like, wow, this is doing some weird esoteric operation on this one branch of the tree that's only necessary for who knows why. You don't want to change it because you don't really understand it. But it's probably because they found some weird values in the CSV one time, and so it has to be in there just in case. And you know, that's one of the cool things about legacy code, is that it does have that kind of baked-in experience. It learns, it has learned, but often it's totally divorced from the lesson itself, right? It's sort of like the end result of the lesson is, oh, these five lines of code got put in, but you don't know why. You can't figure it out. You can't go back in time and ask the person. Yeah, exactly, if the why wasn't somehow preserved in some way, right? And, you know, sometimes that gets put into the Git repository, in the commit message or something, and you could maybe go back and figure it out, like when did these five lines get added, but often it's not. So there's just a lot of, I guess they call it oral tradition, or tribal knowledge. And I'm kind of curious, when you say things getting really abstract, I'm wondering if that's in part because of how easy it is to basically write your own language inside of Lisp. Right, you can very easily define a domain-specific language inside of a programming language like Lisp, you know, because you can write your own functions, and your functions look just the same as the built-in ones. And then you've got the macro support; you can leverage that to do some really interesting, clever things. Is that part of what makes things very abstract, or is that like a sword that kind of cuts both ways, with the wielder being the person who gets cut? That's a good question. 
I would tend to say no. Like, I haven't done any real research on this, but my feeling is that, you know, defining new functions, and even something like a DSL, actually is an opportunity to become more concrete. It's an opportunity to put names to things. And so you can actually explain what the function's purpose is in the name, whereas the things I'm talking about are abstract because they're removed, at least one level removed, from the concrete data expectations. Like, why are we checking for null here? Didn't we already check for null? Or, why here? Why not over there? There are all these abstract decisions that got made that probably were very practical at the time, but it's been lost. It's totally divorced from the why. Now, another cool thing about Lisp in general in terms of legacy is, because it's so old as a tradition, I find that Lisp systems and languages contain a lot of wisdom. You know, we're so often used to languages that were either designed by committee, or maybe they were designed by one person and then somehow got popular. You know, I'm thinking like Python or PHP, or Perl. They were smart people and they did a really good job, but it also feels like it started from zero and didn't have a lot of legacy behind it, like good legacy. Whereas with Lisp, if you look at how it really developed over time, so it's 60 years old now, and back in the day, maybe 20 to 30 years ago, or 30 years ago probably more likely, a university would have a computer and it would have a Lisp on it. And so someone would learn Lisp and be apprenticed in its use and how it works. And then that person would graduate, you know, they'd get their PhD or whatever. And then they'd get a job at another university, and now they get a new lab. It's got a new computer, but there's no Lisp on it. And it's been long enough that there's a new generation of computers now that have a different instruction set, so you can't just port it, right? 
And you're happy to have a new computer as opposed to this old one. So you write a new Lisp, and you take what you learned and any new ideas that you have and you bake it into the new Lisp. And so you're taking the wisdom that you learned from the Lisp you learned and trying to import as much of that as possible, and then adding new wisdom to it. And so then you have this kind of divergence and then convergence, as grad students move between labs and there are attempts to standardize and make things compatible. So you have, over time, this generation of wisdom, discovery of wisdom, however you want to call it, lots of divergence coming back together and combining the wisdom. And I think even in a modern language like Clojure, it is very evident that there's a long tradition behind it, a lot of good decisions made, a lot of mistakes made that you can learn from. And it's all in there. Awesome. My first exposure to Lisp was in university. I studied computer science undergrad at Virginia Tech, and one of the optional courses that I took, it wasn't a required course, it was an elective, was comparative languages. And the professor had us walk through solving the same problem in a procedural paradigm language, so we used Pascal for that. And then a functional paradigm language, so we used Lisp for that. And then a logic paradigm, so we used Prolog for that. And something that I found fascinating is how the program kept getting shorter, right? Only because the professor pushed us. So I remember the whole class did really, really well on the Pascal assignment. And the program that we were implementing, it was difficult enough to kind of force you to sit down and make sure you got it right, but it wasn't crazy challenging. So there was a subset of the English language, and we had to be able to write a program that diagrammed sentences that were written in that subset of English. Okay. 
So we had to identify the subjects, the verbs, the adverbs, noun, pronoun, you know, things like that. So we had to basically map and diagram the sentence. So we ended up with lots of these little utility functions in the procedural language to check to see, like, is this a noun, is this a verb, you know, to basically figure out what the keywords are. When we turned in the Scheme version of our program, we were using Scheme, he handed it back and asked us to take another week on it and turn it in again. And he showed us what we could have done. None of us really saw the advantage of closures, because I think it was the first time I'd ever been exposed to them, because we were a C and C++ curriculum, yeah, at the university. And I'd never thought of a function returning a function, right, and that you could then build other functions out of that. Like, you could even write another function that returned a function that made use of that first function. So, you know, he showed us how you could write a very simple function to just say, is this element a member of this list? And that's one function. He goes, now you don't have to write that logic again. Because he was showing us how we had all duplicated that logic over and over. In most procedural languages, you tend to do that. I'll just do a for loop here and check if it's in the list. Yeah. Yeah. So now we had an isMatch function, and then we created an isVerb function, and it was basically just the list of verbs passed into the isMatch function, plus the thing that was passed into it. And I was like, oh my gosh, it cut at least a third out of my program. And I felt like it made it read better. I remember just feeling like, I'm able to read this and understand it better. So I'm curious, do you experience that? 
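The isMatch/isVerb idea from the anecdote above can be sketched in a few lines of JavaScript. This is a hypothetical reconstruction, not the original course code; the function and word lists are illustrative. A higher-order function closes over a word list and returns a membership predicate, so each part-of-speech check becomes a one-liner.

```javascript
// isMatch closes over a word list and returns a predicate function.
function isMatch(words) {
  const set = new Set(words);        // capture the word list in a closure
  return (word) => set.has(word);    // the returned function does the check
}

// Each part-of-speech checker is just isMatch applied to its word list.
const isVerb = isMatch(["run", "jump", "read"]);
const isNoun = isMatch(["dog", "book", "tree"]);

console.log(isVerb("run"));  // true
console.log(isNoun("run"));  // false
```

The duplication disappears because the membership logic lives in exactly one place, and every new category is a single call.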
Do you see that when you're looking at programs that really leverage functional programming in a harmonious way, let's say? Do you feel like you get that ease of reading? For sure. Yes. I think it's complicated, because most procedural languages that I use now have higher-order, first-class functions these days. So it's so easy to just get all the benefit of them while you're still in the procedural paradigm, or mostly in the procedural paradigm. But yes, I think that, how should I put this, I find that my JavaScript is shorter because I know Clojure, let me put it that way. And a lot of it is because of the functional paradigm, but that comes with also a sense of what's possible in terms of the tool set that you could build. So you talk about this function that checks if a value is an element in the list, right? And that is a utility that you could write yourself, right? But he had to show you that that could exist, right? Right. We hadn't noticed that we basically wrote it three times. Yeah, exactly. Three times, right. Like we knew. Right, right. And it might even be hard in C, right? You said you were using C for your procedural, Pascal, but yeah, oh, even harder. Yeah. So it might even be hard to look at the Pascal code and see that it's duplicated, and then see how to remove the duplication, right? It might even be like you'd have to really squint to see it. So yes, the fact that you have closures helps a ton with that. I go over this in my book: how to recognize, I guess, one very common pattern, like, you have a for loop, and how do you extract out the body? Because the body is what's different, right? The for loop itself is what's repeated, the, you know, for i equals zero, it's very common code. So how do you extract out the body, since the body is what changes? You can't take just the first line, because you need that open brace, and syntactically it needs to be with the close brace. So how do you do it? Right. 
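The for-loop refactoring described above can be sketched like this. It is a minimal illustration, not code from the book: the loop scaffolding is the repeated part, the body is what varies, so you extract a higher-order function that takes the body as a callback. (Modern JavaScript already has forEach and map built in; this just shows how you could derive the idea yourself.)

```javascript
// The repeated for-loop scaffolding lives here exactly once.
function eachIndex(n, body) {
  for (let i = 0; i < n; i++) {
    body(i);  // the varying loop body is passed in as a function
  }
}

// Callers now supply only the part that differs.
const squares = [];
eachIndex(4, (i) => squares.push(i * i));
console.log(squares);  // [ 0, 1, 4, 9 ]
```

This sidesteps the open-brace problem: the body becomes a value you can hand around, instead of text trapped between braces.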
Well, you can extract out a higher-order function that does that, right? Okay. So there is that. I think that that is a big part, but then another part is just having kind of a repertoire of the functional tools that are really useful. So this function that checks to see if an element is a member of the list, like, I don't think I'm smart enough to come up with that on my own, right? Like, I would need to be shown. Likewise, when I'm in Clojure, there's a huge standard library full of functions like that, and you can, you know, look through it and be like, oh yeah, I know when I could use that, and I know when I could use that. And once you use it, you realize, oh, it's just three lines of code. It's not hard to write, but it's super useful, and you use it all the time if you know it exists. And so then when I'm in JavaScript, I'll be solving a problem and I'm like, oh, what I really need is frequencies. I mean, my solution is like a two-line function if I have frequencies; otherwise it's 20 lines. So let me write frequencies in four lines, then I'll write the two lines and I'm done. Otherwise I could figure out this nested for loop or something that's 20 lines long and harder to read. But I think the reason I'm able to do that is only because some other smart person or people have put the time in to make those utilities. And that's some of the wisdom that has been passed down through the Lisp tradition. And I think that that is one of the main advantages of learning a bunch of languages. You get access to all these different idioms. You get exposed to, like, oh, there are different functions I could write. It doesn't have to be for loops all the time. So yeah. I remember vaguely, like, Scheme had very limited flow control. I don't remember having if statements, even. It has. It has. Okay. Yeah. All right. It does. Maybe we were just told we couldn't use them or something, like, yeah, like, this is challenging. 
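The frequencies utility mentioned above (the name comes from Clojure's standard library, where it is a built-in) really is only a few lines in JavaScript. This is one possible sketch, not Eric's exact version: count how often each value appears in a collection.

```javascript
// frequencies: collection in, Map of value -> count out. A pure calculation.
function frequencies(values) {
  const counts = new Map();
  for (const v of values) {
    counts.set(v, (counts.get(v) || 0) + 1);  // increment the tally for v
  }
  return counts;
}

const tally = frequencies(["a", "b", "a", "a", "c"]);
console.log(tally.get("a"));  // 3
console.log(tally.get("c"));  // 1
```

Once this helper exists, the "20-line nested loop" problems collapse into a lookup on the returned map.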
I mean, yeah, I think you need an if technically. I mean, you could write it yourself if you had to, but it's easier if it's given to you. Yeah. Yeah, I think we ended up abusing a list match operation or something like that in order to get away without them. But see, that's the other thing: you don't need if statements as much. I mean, you do need them, but you start operating on lists and you can just say, oh, I'm going to do this operation to every element of the list. And if the list is empty, that's handled. It just doesn't do it at all. And so now you don't need an if, you just do it to the list. Which is, you know, it's one of those things. I work in PHP a little bit, and it's kind of annoying the way some functions will return null instead of an empty array. And I don't know why they do it, but it's the convention. And then sometimes they even return false instead of an empty array. Okay. You have to check, you have to say, did I get something back? And then if you did, then you have to say, is it empty? Because I'm about to loop over it. And you see that code repeated over and over. Is it not equal to null? Is it not equal to false? Is it the empty list? Okay, there must be a list, an array they call them, I'm going to iterate over it. Well, why not just wrap that up in a function, you know, that just checks all that for you, or better yet, checks it for you and always returns a list? So if it's false, it gives you an empty array. And if it's already an array, it just returns it. And so that way you don't have any if statements. You just get it, you transform it; the if statement happens somewhere else. You transform it into a representation that you can just loop over. Yeah. Yeah. That's how you make languages that, you know, have warts a little bit more habitable. 
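The wrapper described above might look like this in JavaScript (the PHP version would be analogous). The name asArray and the exact set of warty values are assumptions for the sketch: normalize "nothing" values into an empty array once, so callers can always loop without repeating the if-checks.

```javascript
// asArray: normalize null/undefined/false into [], so every caller can
// loop unconditionally. The if statements live here, and only here.
function asArray(value) {
  if (value === null || value === undefined || value === false) {
    return [];  // warty "nothing" values become an empty array
  }
  return Array.isArray(value) ? value : [value];  // wrap lone values
}

console.log(asArray(false));      // []
console.log(asArray(null));       // []
console.log(asArray([1, 2, 3]));  // [ 1, 2, 3 ]
```

Looping over an empty array simply does nothing, which is exactly the "if the list is empty, that's handled" point from the conversation.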
When you're looking at an existing chunk of Clojure or Lisp or similar code that's written in a functional paradigm language, what are some common cleanup solutions that you find yourself reaching for? And I'm kind of curious about ones that might be unique to functional programming. And I think we may have started to touch on some of them in the conversation we just had. So, cleaning up existing Clojure code. My go-to, when something just feels messy but I don't know why, is I just start pulling stuff apart, right? So I look at a big function and I'm like, this is doing too much stuff. And I just pull it into pieces. That's what I do. So it might be 10 lines long. I leave the function as it is, take out five of those lines, I have to, you know, figure out which five, and I make it a new thing. Name it. Now I call that function instead of having the code inline. And I often get it wrong. I cut it in the wrong spot, you know, but you learn that by doing it. And then you just keep doing that. You can break it into, say, five separate functions, and you've got it kind of as atomic as you can get. Now you can put them together however you want. It gives you a feel for the code and what it's doing. Also opportunities for reorganizing it, if it's messy. Another thing that I've always been curious about: I've never worked on a large production system that was written in a purely functional language, well, I guess other than JavaScript. But JavaScript, I feel like, has become multi-paradigm. So you can certainly write procedural and object-oriented code inside that language and not even leverage any functional aspects of it at all. Whereas I feel like some functional languages force you to actually leverage the functional paradigm. So for systems that are built in more functional environments, I'm curious, how common a practice is testing? Yeah. 
I mean, functional programmers like testing. I think they have a big advantage in that pure functions are much easier to test. It might not seem like we have as much culture around testing, but I think it's because we don't have all of the detail, like, oh, this is a mock and this is a fake. We don't do as much of that, because with functional code, pure functions, you don't need to; it's actually better not to. It's better to just test: is the output equal to what I expected it to be? So for listeners who might be relatively unfamiliar with functional programming, can you unpack what a pure function is? Yeah, sure, sure. So in functional programming, we call a function pure if it doesn't have any effect outside of itself, and also, whenever you call it with the same arguments, you're going to get the same result. So this, you know, is also known as a function without side effects. I don't really like these terms, but they're the terms that exist. "Pure" has too much of a value judgment on it, in English anyway; I don't know how it translates to other languages. I mean, if you look at the history of the term, it's not a value judgment. It's just that a lot of functional programmers are in academia. They are also familiar with the term function as it is used in math. And so they look at a function in a program and they're like, it's not really a function, because it's not a mathematical function, because it also, like, prints out to the network. You know, it sends a message on the network or something, and no function in math can do that. A function in math is just a mapping from inputs to outputs. So they call that a pure function just to try to say it's a mathematical function with nothing else, no other side effects. That's where the term pure comes from. It's not great. I like the term calculation. You just pass in some arguments and you get an answer. And that's it. 
It's just doing a calculation. Yes. Just a calculation. It's a computation from inputs to outputs. You're also not allowed to do stuff like check the current time, right? That is an input that is changing. It has to be the same inputs, and usually those are passed in as arguments. So there are calculations, also known as pure functions. And then there are actions, which are impure functions. So functions that do depend on when you call them. They could give you a different answer at different times, or they could have some effect, like sending an email to someone. And so making that distinction is a really important thing in functional programming. And it turns out that because calculations don't have any side effects, their inputs are all very explicit. They're just the arguments. They're much easier to test. They give you the same answer anytime, anywhere. And you can run them as many times as you want without any trouble. So you can run your test a thousand times and it's not going to send a thousand emails, right? Whereas if you had an action that was sending emails, you'd have to set up a fake SMTP server or something to test it. And so one thing that functional programmers do is, I mean, you need actions, right? You need them. Like, you need to send the email. That's the purpose of your program. You need to write to disk, or whatever your program does. It needs to happen. But functional programmers tend to move more and more code out of the actions and into the calculations. So, one example: if you have to send an email to every customer, you might write code that fetches all the customers from the database, loops through them, and sends an email to each one. That's a very procedural way of doing it. And that bit of code is an action. You know, you might wrap that in a function. An action, yeah. Yeah, that's called, like, sendEmails or something. A functional programmer would say, let's break this down. 
So we're not going to fetch the customers from the database within our function. We're going to pass them in as an argument. Just assume someone else is going to give it to us. We don't know where they got it. They might have gotten it from the database, or this might be a test. We don't know, right? It might just be a list of three customers, just as a test. And then we're not going to send the emails directly. We're going to return the list of emails that we should send. And we'll assume someone else will send them, or, like I said, someone is going to test that this was the right set of emails to send. And so you just have a transformation from a list of customers to a list of emails. Pure code, you know, it's pure calculation, no side effects, and someone else, some other function, will do the action parts. It'll coordinate: it'll fetch from the database, pass it to this function, take the return value, loop over it, and send the emails one at a time, like that. Of course, that code is much simpler. It doesn't have to make any decisions, like which email do we send to which customer. It's just looping through this one list, doing the same thing for each one. So in that way, the code for generating the emails is much easier to test. And then your action is much simpler, because the action is where all your bugs are going to be, right? Did I send the same email twice? Did you specify the correct port or the correct server? Yeah. All those things are going to be in the action. And so you've simplified the action, made it much easier to get it right. It might still be hard to test, but it's easier to get it right. That's what we do. Functional programmers talk a lot about pure functions and calculations, but really the focus is more on the actions. The calculations are so easy, we just want to move as much code into them as possible and then kind of forget about them, because they're so easy. Yeah. 
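The calculation/action split described above can be sketched like this. All names here (emailsForCustomers, sendPromotion, fetchCustomers, sendEmail, the doNotEmail flag) are illustrative assumptions, not code from the conversation: the calculation turns customers into emails, and the action is a thin coordination layer with no decision logic of its own.

```javascript
// Calculation: customers in, emails out. Same input, same output, no effects.
function emailsForCustomers(customers) {
  return customers
    .filter((c) => !c.doNotEmail)  // all the decision logic lives here
    .map((c) => ({ to: c.email, body: "Hi " + c.name }));
}

// Action: fetch, call the calculation, send. fetchCustomers and sendEmail
// are passed in, so even this thin layer is easy to exercise.
async function sendPromotion(fetchCustomers, sendEmail) {
  const customers = await fetchCustomers();
  for (const email of emailsForCustomers(customers)) {
    await sendEmail(email);  // the only side effect in the whole flow
  }
}

// The calculation is testable with no mocks and no fake SMTP server:
const emails = emailsForCustomers([
  { name: "Ada", email: "ada@example.com", doNotEmail: false },
  { name: "Bob", email: "bob@example.com", doNotEmail: true },
]);
console.log(emails.length);  // 1
```

Running emailsForCustomers a thousand times sends zero emails, which is exactly the testing advantage being described.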
So do you feel like the calculations just don't get tested as much, because you can almost just visually inspect them to see that they're doing the right thing? No, I think they get tested more heavily. I think it's the opposite. The actions are so hard to test that you don't put much logic in them. You know, you just put very straightforward code: query the database, and then loop through and send an email one at a time. And it's much easier to just visually inspect that it's doing the right thing. And then the logic of, oh, does this customer need an email? You know, maybe they have the do-not-email flag on. All that logic in there, you could test that to death. And it's more fun to test, because you just, you know, write a million tests and it's done. When you're writing mocks, you're actually writing testing logic. I think the functional programmer's approach is, like, our actions are so tiny and small that we don't have to test them as much. That's our feeling, at least. And so, we started to touch a little bit on the next question I was going to ask, which is, does it feel weird when you are interacting, or kind of forced to interact, with things that change, things that have natural, mutable state, like time or a file system or a console? Those are things that just inherently change because of their nature. So I imagine there's a little bit of friction between the way your brain thinks about interacting with those parts of your program, kind of, you have the actions you were talking about, versus the areas of your program where you know state's immutable. Right. So this is where I think that sort of practical functional programming diverges from academic functional programming, in that we write real software that does the same kinds of things that other software does. And I mean, as a commercial programmer, you know, in industry, not in academia. 
You get a paycheck, yes, paid by some customer, exactly, somewhere, right? If my software sends the right emails, I get paid. So we do focus on that mutable aspect. What I think is misunderstood is that I think functional programmers actually have a better set of tools for approaching stuff like time. Whereas in a procedural program, time is everywhere. For every mutable variable, the current value of it is dependent on the whole history of the program. What branches did it go down? You know, you can't look at a line of code and say, well, at this point in the program, this global variable is going to be, you know, X, it's going to be the current temperature. Because what if there's another thread going and it's, you know, you know what I'm saying? Like, something could happen in the history. And so, I mean, it all feels like it's just time all over the place and you're not even aware of it. Whereas the functional programmer looks at it like, okay, this thing is going to be the current temperature. At any point, we want any thread to be able to read from it without blocking, and it will always have the most recent possible temperature in it. And so we set up this invariant. We're talking about time. We're talking about recency. We're talking about the current value. We're talking about threads, because we know we can't control when the threads will read it. They're going to get interleaved in different ways depending on how busy the cores are on your CPU and how the scheduler works. We can't control that. So we have to set up invariants that make it easy for us to reason about what is going to happen. And one of those is, it's always going to have the current reading. And I find that functional programmers tend to do this more than the other paradigms, the two big ones, procedural and object-oriented. 
We tend to think in terms of time, in terms of steps. It's going to go from one valid state to the next valid state in an atomic step, because we don't want to be halfway in between steps and have to coordinate threads so that they don't read it during that time when it's halfway in between. We're going to do it in atomic steps. Like, we talk about that. We create little primitives that make it easy to set up these invariants. And so an invariant, that's just something that can't be false, right? It's something where you're saying, whatever is stated as an invariant will always be true. That's right. It will always be true, and you want a small set of these that are powerful, in the sense that they let you ignore details. The reason we use this variable is for modularity. We want this variable to always have some certain value in it. And we don't need to care where it came from. Some other piece of code has set it; it's got its own logic, its own procedure. We don't have to care over here. We just have to read the variable. We just trust that it has the right thing. But then how far can you trust it? Is it a mutable object? Who else has access to the mutable object? Can they change it? What if they have a bug and they change it and you don't realize it? Or they reuse it. They just think, oh, I have this object, I can set it all I want. Well, now you're reading this thing while it's being changed, right? And so that's, you know, let's put an immutable thing in there. Now anyone can read it and share it. Any part of the code can access it. But we don't have to worry about any of them modifying it, things like that. Those are the kinds of invariants that we look at. For example, invariants on how a variable is modified, because you need to have mutable state. So how do you modify it? Well, we modify it only, like I was saying before, in these very stepwise ways. 
Like, we go from one immutable current state, a valid state, a consistent state, whatever you want to call it, to the next immutable valid state, to the next immutable valid state. So you only make these leaps, and it's never in an in-between state. No one can read the in-between state. You can only read, you know, the valid states. Another invariant we go over in the book: if you're doing all these Ajax requests, you want a whole callback chain to complete before the next callback chain starts. Because the callback chain, you know, if you're working on a web app, it's probably going to make some Ajax requests, and then at the end of it, it's going to modify the DOM. But because it's asynchronous and the callbacks happen out of order, depending on when the Ajax requests complete and stuff, you could click the button twice in a row, and the results of the first click are written to the DOM after the results of the second click. Right. So you add something to the cart, and then you add the same thing to the cart again, really fast. The second click could finish first and write the total to the DOM, and then the first one writes the old total. So this happens all the time. And what we want is, okay, we need an invariant. That shopping cart has to be able to be modified. We need to let you click twice. So we need to make it so that maybe the handler for that second click doesn't even start until the first click is totally done. That would solve it, right? You write to the DOM for the first click; okay, now you can start the second click. And so in the book, in Grokking Simplicity, we set up a queue. We make a reusable queue that you can put callback chains in. And the last callback, the thing that writes to the DOM, has to run a little function that's like, okay, I'm done, right? And that function actually starts the next item in the queue. 
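A one-at-a-time queue in that spirit might look like the sketch below. This is written from memory as an illustration, not the book's actual code: each job receives a done callback, and the invariant is that the next callback chain starts only after the current one calls done.

```javascript
// makeQueue returns an enqueue function. Jobs run strictly one at a time.
function makeQueue() {
  const jobs = [];
  let running = false;

  function runNext() {
    if (running || jobs.length === 0) return;
    running = true;
    const job = jobs.shift();
    job(() => {      // pass `done`: the job calls it when its chain finishes
      running = false;
      runNext();     // invariant: the next chain starts only after this one ends
    });
  }

  return function enqueue(job) {
    jobs.push(job);
    runNext();
  };
}

// Two "clicks" enqueued back to back. The first job is slower, but because
// the queue serializes them, the results still land in click order.
const log = [];
const enqueue = makeQueue();
enqueue((done) => setTimeout(() => { log.push("first"); done(); }, 20));
enqueue((done) => setTimeout(() => { log.push("second"); done(); }, 0));
setTimeout(() => console.log(log.join(",")), 60);  // first,second
```

Without the queue, the 0ms job would finish before the 20ms job and the order would invert, which is exactly the stale-cart-total bug described above.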
And it's not that much code. I'm talking about all these things as examples because they're the kinds of things that I think functional programmers think about, and I don't see it so much outside of FP. Yeah. I think the paradigm forces you to reason about some of these details. Or maybe doesn't force, maybe encourages you to reason about some of these details. Like, you know, in OO, it's just, you know, I have a long-lived object, right? And its job is to respond to messages and maintain its own state. And, you know, if two different threads happen to ask it to do something, then that object's behavior is defined by that object, right? Exactly. Exactly. Well, I mean, it could be that your language bakes that in. Like, for instance, Erlang has a mailbox. So when a process gets a message, it gets put in the mailbox, and the process decides when to check for messages, like, okay, I'm ready, I want the next message. That way only one message is being processed at a time. There's a lot more control in the process. It's like a queue built into the language itself. Exactly. And that's one of the things that gives Erlang its concurrency safety, whereas in something like Java, that method that you call, I mean, so many things could happen. It could even be inlined, like, across class boundaries. So it's not like a message going somewhere. And it might not be inlined by you; it might have been an optimization. Yeah. Exactly. It could be the JIT. So you run it locally, you do a thousand tests, and it's not enough to kick in the JIT, but then in production that thing gets called millions of times, the JIT comes in, and now it starts failing, and you're like, all of our tests passed, right?
This is the kind of thing, you know, that's why it takes books to learn how to write concurrent Java code, because there are all these problems. You've got to use the volatile keyword; it should be the default, but it's not. So I think there are a lot of language design issues that come into play there. But I think you're right. The paradigm also is sort of, I mean, I don't want to say at fault, but the initial seed of the paradigm, the idea you were talking about with objects that send messages, has no notion that some messages or some methods can't be run safely the same way others can. The seed of our paradigm is: some code is pure functions, it's calculations, it can be run a million times and it doesn't make a difference, versus some code will make a difference depending on how many times you call it or when you call it. That's the seed. So that's what we think about from the beginning. Time is important in the paradigm from the start. That's what I think the really big difference is. And when you were talking about actions and calculations earlier, one thought I had was, I was kind of curious, you know, Java and some OO languages, or even some procedural languages, force you to declare that the function body you're writing is going to throw an exception, or, you know, could result in an error. Right. So I guess I'm wondering, are there some functional languages that force authors to say, this is an action versus this is a calculation? Yeah, I think Haskell is the closest to that, because it has a type called IO, and IO stands for input/output, because most side effects are I/O, reading or writing somewhere else. Yeah, reading from a file, writing to a file, something like that. And so all of those side effects go in there. You cannot write to a file outside of IO; writing to a file is an IO action. It's in the type, right?
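Here's a rough JavaScript analogy to that idea (my own sketch; the names are invented): represent an action as a value, a zero-argument function describing the effect, so pure code can build and combine actions without running anything, loosely like Haskell's IO values.

```javascript
// A calculation: same inputs, same output, no side effects.
function addTax(total) {
  return Math.round(total * 1.1);
}

// An "action" represented as a value: a thunk describing an effect.
// Building it doesn't print anything yet.
function printLine(text) {
  return () => console.log(text);
}

// Pure code combining actions into one bigger action, still without
// running them, loosely like sequencing IO values in Haskell.
function sequence(actions) {
  return () => actions.forEach(run => run());
}

const program = sequence([
  printLine("total: " + addTax(100)),
  printLine("thanks!"),
]);

// Nothing has printed so far. Effects happen only when you run the value:
program(); // prints "total: 110" then "thanks!"
```

JavaScript won't stop you from sneaking a side effect into `addTax`, which is exactly the guarantee Haskell's type system adds on top of this pattern.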
And so they're leveraging the type system to make sure, or to know, what's pure and what's impure. So your procedural code goes in IO: read from the database, then send all these emails, that kind of stuff goes in IO. Yeah. I was even wondering about linting, being able to give people hints at compile time and say, you know, you have a function that you've decorated as being a calculation, a pure function, but it's calling something that's an action. And so while your logic might be sound, you have a side effect in there that you might not realize. Yeah, I think there's some usefulness in that. You know, you could maybe do something like say, by default, every function is considered an action, because you don't want the default to be: I didn't mark it, so the checker knows nothing about it, right? So you start with the worst case. Everything is an action. And it turns out that actions are universal, right? So that's fine. I can go into that, but you could just mark everything as an action unless there's an annotation that says it's a calculation. And then you write some annotations and it's like, oh, you said it's a calculation, but look, you call this function, and that one is not marked, so I've defaulted to considering it an action. So then you've got to jump over to that code and look at it and be like, okay, yes, that is a calculation. And you put calculation on it, and it's like, ah, yes, but you forgot to mark this one. And so you would just kind of spread out through your call tree. And eventually you'd either reach the end, or you'd be like, oh no, that is an action. Right. Now you have to kind of undo all those annotations. But I bet, you know, eventually it would start paying dividends. At first it would be really annoying. But it reminds me of one property of actions. One of the reasons to avoid them is they spread.
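That defaulting rule could be sketched mechanically, something like this in JavaScript (an entirely hypothetical marker scheme, not a real linter): unmarked functions count as actions, and a composition is only a calculation if every piece is.

```javascript
// Hypothetical marker scheme: a function is a calculation only if
// it has been explicitly marked; everything else defaults to action.
function calculation(fn) {
  fn.kind = "calculation";
  return fn;
}

function kindOf(fn) {
  return fn.kind === "calculation" ? "calculation" : "action";
}

// Composing two functions: the result is a calculation only if
// both pieces are; otherwise action-ness spreads to the wrapper.
function compose(f, g) {
  const h = x => f(g(x));
  if (kindOf(f) === "calculation" && kindOf(g) === "calculation") {
    calculation(h);
  }
  return h;
}

const double = calculation(x => x * 2);
const save = x => { console.log("saving", x); return x; }; // unmarked: action

console.log(kindOf(compose(double, double))); // prints "calculation"
console.log(kindOf(compose(save, double)));   // prints "action"
```

One unmarked function anywhere in the composition is enough to make the whole thing an action, which is the spreading property described here.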
So if you write a function and you call an action in that function, that function is now an action. So what that means is, I don't know if you're used to visualizing stuff as a stack, you know, you start with main. Main is at the bottom of your stack, right? Main is the entry point. It has to be an action, because it has to do something. Then it will call functions. Some of them will be actions. But at some point, you're going to reach the last action on the stack. That action is calling functions that are calculations, and those can't call actions, by definition. So they're going to be calling more and more calculations. So there are going to be actions at the bottom and calculations at the top. So, you know, this spreading thing: if you had an action somehow at the top, well, now it's going to spread all the way down. Right. I think it's pretty neat that you get that separation in the stack. Awesome. Well, I feel like I could continue to chat with you about this for probably another hour. Yeah, likewise. I wrote a whole book on it. Speaking of which, that book isn't something we mentioned in your bio. So tell us a little bit about the book. Yeah, the book is called Grokking Simplicity. It's published by Manning. It's 550 pages, but an easy read, about functional programming, about the stuff we've been talking about. The whole purpose of the book, why I wanted to write it, was because I feel like most functional programming writing is very academic. Even when they try not to be, they use academic terms like you're supposed to already understand them. And my idea was, we need something to kind of define industrial functional programming: what are the practical skills that functional programmers use all the time that could be practical outside of functional programming as well. There are two parts. The first part is 10 chapters, and about eight of them are all about actions, calculations, and data, which is the third thing.
Data is just the stuff that doesn't run. And how you go about refactoring large actions into small actions and small calculations, how you implement immutability: all this very practical stuff that I feel like most books gloss over. They might have a sentence or two defining pure function, and I feel like, no, that's the first eight chapters. This is all about pure functions and how you use them, because that's what people want to know. And then the second part is all about first-class functions, using map, filter, and reduce and how to chain them together, and then also the stuff we talked about with time and creating invariants: how to analyze the different possible timelines in your code and solve problems in them. Very practical things that we do in functional programming. No monads, nothing like that. We don't go too mathematical. Nice. So one of the things we like to ask everybody on the show is, what's something you love about legacy code? So I'm kind of curious, what is it you love about legacy code? I like that it works. It's stood the test of time. One of my first jobs was at a company that, and this is in the late 90s, like '97, provided services for minicomputers. So I got to see these washing-machine-sized computers running, a washing-machine-sized computer being a minicomputer because it wasn't a mainframe. You know, these things they don't make anymore, but there were still parts warehouses; you could still buy a big hard drive, like this big, like four megabytes or something, so ridiculous. But, you know, they're stored in oil so they don't rust and stuff like that. We'd go around to different offices, and, you know, this real estate office was running on one, and a whole city's government was running on one. You know, they had invested in these 40, 50 years ago, when that was the state of the art, and they invested in all the software that ran on it. The system was cool.
It's called a Pick system, and it had a database that you could program with BASIC, and, you know, it creates forms that you see on your terminal. So you could write applications with this: it would store stuff in the database and do some logic in BASIC. And their whole payroll system was on this thing, and, you know, they're reliable machines. And also what I witnessed at the time was people realizing that, this being the late 90s, you could get a PC, like a Windows 3.1 PC, get some emulator software, and run one of these refrigerator-sized computers inside of the PC. It was faster and cheaper. And you also got everything else that came with the PC: it could connect to the internet, it could do all the other stuff. And you could buy the parts at Office Depot. It's a totally different logistics problem. And you could see that transition. And there were a lot of older people there, in their 70s, still working. They knew how to program in BASIC. They knew these machines inside and out, and they could service the software and write new software when needed. You know, I think there's a lot to it. I think they were smart to think about a transition plan, like, let's emulate this, or let's get the data out and rewrite it in something that we can hire for. But it worked. It worked. It's great. So if people want to pick your brain further and kind of continue the conversation, what's the best way for people to reach out to you? Go to my website, lispcast.com or purelyfunctional.tv, and you'll see all the links for social media there, including email. I prefer email, so if you have a question, email me. All right, good to know. So people won't be sending you Twitter DMs then, because they listened to this. Yes, Twitter DMs are a terrible interface, you know, a terrible interface.
I don't know how to manage it, you know. It seems like the kind of thing where you have to be in Twitter all the time. It's sort of similar to text messages: you look at a text message, you read it, you can't respond right away, and it's just at the top of the list and it's going to get pushed down. There's no way to say, I did it, get it out of here. I use Gmail, so as soon as I'm done with an email, I archive it. It's gone. I don't have to look at it anymore. Anyway, that's my trouble with Twitter DMs. Awesome. Awesome. Well, thanks, Eric, for being on the show. Really appreciate you taking the time to talk with us. For everyone listening, please run out and grab a copy of Eric's book. Yes, please do. It's available now. It's not fully published yet, but you can get it in the early access program. Yeah. So yeah, even better: get it in early access. Give it a read. Give Eric some feedback. You have a chance to make the book even stronger. Yes. And if you'd like to continue the conversation inside of the Legacy Code Rocks community, you can join up over at slack.legacycode.rocks, and over there we have a virtual meetup channel that you can join. Our virtual meetups meet Wednesdays at 1 p.m. Eastern. And, yeah, it's kind of a running joke that it's a support group for people who are working on existing systems, and it can be a great place to get some help, or share some ideas or insights, or get some practice pitching something to a conference. All great things can be done there. And then we also have MenderCon coming up in May. May 15th is when we're going to do MenderCon, and it'll be a virtual conference hosted on the Hopin platform. So if you're able to attend, please check it out at mendercon.com. And thanks, everyone, for listening, and we'll see you in the next episode.