# Monads in the real world

This is an episode of *Thoughts on Functional Programming*, a podcast by Eric Normand.

Subscribe: RSS, Apple Podcasts, Google Play, Overcast

Monads are real, y'all. They are all around us. In this metaphor-free episode, I'll share two real-world monads you interact with all the time. No burritos or space suits, I promise! Plus, we'll see why monads are useful in Haskell.

## Transcript

Eric Normand: Today, we're looking at monads in the real world. By the end of this episode, we will have a pretty good idea of what they are, based on real, concrete, actual stuff from the real world. I hope we can also see a little bit of how they're defined mathematically.

I just want to also say that this is going to be a completely metaphor-free monad explanation. These are not metaphors. I am trying to look for monads in the real world.

We can talk about other categories and how we see them in the real world, for instance, monoids. There are lots of monoids in the real world. Take a toy train set: you can detach the cars and then reattach them, and they maintain their order when you reattach them, so it's a monoid. The attach operation is associative, and it has an identity, the empty train.
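The train-set idea can be sketched directly in Haskell, where lists already form a monoid under concatenation. This is a minimal illustration, the train names are made up:

```haskell
-- A toy train as a list of cars: lists form a monoid under concatenation.
-- Attaching trains (<>) is associative, and the empty train (mempty) is the identity.
train1, train2, train3 :: [String]
train1 = ["engine", "coal car"]
train2 = ["boxcar"]
train3 = ["caboose"]

main :: IO ()
main = do
  -- Associativity: grouping doesn't matter, and the order of cars is preserved.
  print ((train1 <> train2) <> train3 == train1 <> (train2 <> train3))
  -- Identity: attaching the empty train changes nothing.
  print (train1 <> mempty == train1)
```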

There's a whole episode on that. If you want to learn about monoids, which are cooler than monads, by the way, go check out the monoid episode. This one's all about monads though. Metaphor-free, the real monads that you have dealt with before.

Before I get to the real-world things, I want to briefly talk about what a monad is in the abstract. We'll touch on these and explain this through real-world examples.

Monads are mathematical objects. They're values that have a couple of properties. One is that they're functors. I have a whole episode on functors. Basically, it means you can map over them.

Some people think of functors as containers, but don't do that, that's a metaphor. I'm not doing that in this one. A functor is something that you can map over, so a list, obviously, an array, you can map over an array.

You can map over a maybe, meaning you apply the function if it has a value. If it doesn't have the value, if it's nothing or none, then you don't apply that function.
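Those two cases of mapping, over a list and over a Maybe, can be sketched with Haskell's `fmap`:

```haskell
-- Mapping over a list applies the function to every element;
-- mapping over a Maybe applies it only when there's a value.
main :: IO ()
main = do
  print (fmap (+1) [1, 2, 3])              -- [2,3,4]
  print (fmap (+1) (Just 5))               -- Just 6
  print (fmap (+1) (Nothing :: Maybe Int)) -- Nothing: nothing to apply to
```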

Now, the second thing. All monads are functors; let's just get that out of the way. There's a whole episode on functors, you should watch that.

In addition to being a functor, a monad has a join operation, and that's the one we are going to go deeply into in a couple of real-world examples. So why are monads important? I'm going to try to explain this with an example you might be familiar with.

Let's say I was giving a gift to my friend Bill, and I told him, "Bill, I'm giving you a box of chocolates. Happy birthday." He takes the box, you know, kind of a sizable box, and he thinks, "Wow, thank you."

He goes home, and at home, he opens the box, and inside are lots of little boxes, boxes that fit inside the bigger box. He opens those boxes up, and inside of those boxes are the chocolates. Now, is he going to call me and say, "Hey Eric, you said it was a box of chocolates, but it was actually a box of boxes of chocolates."

No, he is not going to do that. He's going to eat the chocolates and just be happy. A type checker, though, would balk at you. It would say, "Whoa, you said a box of chocolates, but there is an extra level of boxes in there that you forgot to mention."

What do you do when you have this type error? It is a type error. I was not very specific. I was not as pedantic as a type checker needs to be. I'm admitting that. I was operating [laughs] at a human level. This was a birthday gift.

That's what this join operation does. It allows you to go from a box of boxes of chocolates to a box of chocolates. It removes one level of nesting in a structured way that makes sense for that monad. That's all it is. It just removes that one level of nesting. In Scala, and probably in other languages I am not familiar with, you could call a functor "mappable" because it has the map method on it.

A monad also has the map. Remember, all monads are functors, but a monad is also "flat mappable." That "flat" means you're flattening one level of nesting, and that flattening operation on its own is called join. Let's look at another example. Let's have a to-do list.
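The join operation described above exists in Haskell's standard library as `Control.Monad.join`. A minimal sketch of what it does to a nested list and a nested Maybe:

```haskell
import Control.Monad (join)

main :: IO ()
main = do
  -- join removes exactly one level of nesting:
  print (join [[1, 2], [3], [4, 5]])  -- [1,2,3,4,5]
  print (join (Just (Just 5)))        -- Just 5
```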

You know it's like household chores and stuff, but then I get to one thing that is actually a big enough task that it has five subtasks. I want to make sure to get all of it done.

As I'm going through, I just go through one, two, three, four and then when I get to five, I have to do 5.1, 5.2, 5.3 etc. I'm still doing them in order. It's just they're nested in one list more.

I can do a join operation to flatten it into a single list, which is what I'm doing in my mind. As I read it, I flatten it. I read them sequentially from top to bottom. I ignore the nesting.
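The to-do list above can be sketched as a list of lists, where `concat`, which is join for lists, flattens it into the single sequential list you read in your mind. The chore names and subtasks here are made up for illustration:

```haskell
-- A to-do list where one big task expanded into subtasks: a list of lists.
todos :: [[String]]
todos =
  [ ["vacuum"]
  , ["dishes"]
  , ["laundry"]
  , ["groceries"]
  , ["5.1 pack boxes", "5.2 label boxes", "5.3 load car"]  -- the nested task
  ]

main :: IO ()
main = mapM_ putStrLn (concat todos)  -- flattened: read top to bottom, nesting ignored
```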

This join operation lets us do the nesting in the structure of our data, but then undo it when we want it to be flat. That's what join is all about. This to-do list example gives a clue about why something like Haskell uses monads for sequencing I/O operations.

Sometimes, an I/O operation actually has several steps in it. What Haskell needs to do is flatten them all out into one long sequence of things to do. It doesn't want the operations to stay nested, because the nesting would turn into a tree; instead, it can flatten it all out and just run straight down.

That's one part of the monad that Haskell uses. I hope that by showing this with real-world examples, you see that these aren't that hard to understand. Now, when you take them out of the real world and into a very abstract expression, there's not much to them, because every monad is really different.

Every monad has a different structure and different way of joining different semantics for how they join. There's not that much in common, except that it's a functor.

It has a join operation. There are a few monad laws, because it has to be associative and it has to have an identity. Those are usually pretty straightforward. Now, the equivalent in Haskell of Scala's flatMap, remember, I talked about flatMap, is called "bind."

What bind does is compose the map from the functor with the join. It's sometimes more convenient to use bind than to use map and join separately, just like it's more convenient to use flatMap than map followed by flatten.
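That relationship, bind is map followed by join, can be sketched directly. The names `bind` and `half` here are illustrative; Haskell's real bind is the `>>=` operator, and the sketch just checks that the two agree:

```haskell
import Control.Monad (join)

-- Bind re-derived as map composed with join.
bind :: Monad m => m a -> (a -> m b) -> m b
bind m f = join (fmap f m)

-- A function that itself returns a Maybe, so mapping it creates nesting.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (bind (Just 10) half)  -- Just 5
  print (bind (Just 3) half)   -- Nothing: the inner Nothing survives the join
  print (Just 10 >>= half)     -- Just 5: the built-in bind agrees
```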

That's why we see bind defined as one of the two operations of monad, but really, I think that join is easier to relate to real-world stuff. The map already comes from it being a functor, so you don't have to define two different operations. I want to go over Maybe, which is a type in Haskell. It has equivalents in other languages.

It's called Option in Scala, but it's really got the same semantics. Maybe is a type that has two possible cases, two constructors. One is when you have a value and one is when you don't.

It's used for representing optional values. I either have the value or I don't. This has the two cases. In a lot of languages, you might use a value or null to represent I don't have it. In Haskell, there's a type that represents this.

That helps you: if a value isn't a Maybe, you know you will not get null. There are only certain cases that you have to specifically call out, where a function may or may not have an answer.

In that case, the two cases are called Just and Nothing. Just is when you have the value. You might say, "I have Just 5. It's just five." There's no other meaning behind it besides "I have it," and Nothing is when you don't have the answer.

How does join work on a Maybe? Because Maybe is a monad. First, let's do the functor part. How does map work on a Maybe?

Let's say I have Just 5 and I want to map increment over it. We have to cover the two cases; let's do the Just case first. I want to map increment over this value, and it's a Just. It's Just 5.

What it's going to do, because it's a Just, is return five plus one in a new Just. It's going to make a new Maybe with the Just constructor. That's the pattern: reach into the Maybe, grab the value, call the function on it, get an answer, and put it back in the type.

It's a Just with six. If it's a Nothing, that's easy: there's nothing to call the function on, so you just return a Nothing again. Those are the two cases. You've covered it all. That's what map does over Maybe.
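The two cases just described can be sketched as a hand-rolled map for Maybe. The name `mapMaybe'` is made up for illustration; Haskell's real Functor instance for Maybe behaves the same way:

```haskell
-- A hand-rolled map for Maybe, covering both constructors.
mapMaybe' :: (a -> b) -> Maybe a -> Maybe b
mapMaybe' f (Just x) = Just (f x)  -- grab the value, apply f, rewrap it in Just
mapMaybe' _ Nothing  = Nothing     -- nothing to call the function on

main :: IO ()
main = do
  print (mapMaybe' (+1) (Just 5))               -- Just 6
  print (mapMaybe' (+1) (Nothing :: Maybe Int)) -- Nothing
```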

Now, what about join? In join, we have to deal with the nesting. I'll handle the easiest case first. The type signature would be Maybe (Maybe Int). Nested inside the Maybe is a Maybe Int.

The easiest form of this type, the easiest value of this type, is when the outer Maybe is a Nothing. There are actually several cases that we have to deal with now, because we have the nesting. The outer Maybe is a Nothing, so that's easy.

I just return Nothing. That's what the join does. Now, the other case: if the outer one is a Just and the inner one is a Nothing, we have to get rid of the nesting. I don't have a value, so that's also Nothing.

We have two cases that result in Nothing, but if I have a Just (Just 5), then when I join that, it becomes Just 5. It gets rid of one of the layers of Maybe. Those are the three possible cases.

If the outer one is Nothing, it returns Nothing. If the inner one is Nothing, it returns Nothing. Otherwise, it collapses the two Justs into a single Just. That's it. That's a monad.
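Those three cases can be sketched as a hand-rolled join for Maybe. The name `joinMaybe` is illustrative; the standard library's `Control.Monad.join` covers this with the same result:

```haskell
-- A hand-rolled join for Maybe, covering the three cases from the episode.
joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe Nothing         = Nothing  -- outer is Nothing
joinMaybe (Just Nothing)  = Nothing  -- outer is Just, inner is Nothing
joinMaybe (Just (Just x)) = Just x   -- collapse two Justs into one

main :: IO ()
main = do
  print (joinMaybe (Just (Just 5)))                     -- Just 5
  print (joinMaybe (Just Nothing :: Maybe (Maybe Int))) -- Nothing
  print (joinMaybe (Nothing :: Maybe (Maybe Int)))      -- Nothing
```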

They're used a lot in Haskell because of this property of being able to unnest stuff in a way that you can expect and reason about. If I have a calculation that returns a Maybe and then I want to do something to the thing inside of it, but sometimes it's nothing, I don't have to think about that.

I don't have to put an if statement that says, "if it's Nothing, do this, but if it's something, do this." I don't have to write that anymore. That conditional, those two cases, is handled automatically by the join.

That is why they use it in Haskell: it makes it easy to write the nice path where everything returns values, and the case where something returns a Nothing is handled automatically for you.

This has been a deeper episode than I thought it would be. We got into what a monad is mathematically: it's a functor with a join operation. We went over two real-world examples, a box of boxes of chocolates, and a to-do list with nested lists in it.

We also saw how Maybe works in Haskell, and why it's used in Haskell, what it's useful for. Awesome. I hope this helped. If it did, you might want to see other episodes.

You can go to lispcast.com/podcast, and see all the old episodes. Each one includes audio, video, and text. You can also find links to subscribe wherever you want to subscribe and social media so you can get in touch with me and complain that I've created yet another monad tutorial.

This has been my thought on functional programming. My name is Eric Normand. Thank you for being there. Rock on.