Why do I like Denotational Design?

Denotational Design is an abstraction design process created by Conal Elliott. I like it because it asks you to step back and design the meaning of your abstractions before you implement them. In this episode, I talk about why I like it, what it is (step-by-step), and why it's not about static types.

Transcript

Eric Normand: Why do I like Denotational Design? In this episode I hope to explore this really cool design process that uses Functional Programming. I'll talk about why I like it and also talk a little bit about the difficulties of using it in a statically typed versus a dynamically typed language.

Hi, my name is Eric Normand and I help people thrive with Functional Programming. I was asked on Twitter by Scott Nimrod a really great question. I'll read the question. It's actually got two parts to it, but I'll read it. The question is "I'm curious to learn why you are a fan of denotational design, even though it's more challenging in a dynamic language like Clojure?"

That's got two parts really. The first part is, "Why do you like denotational design?" Let me address that one first. Denotational design is a cool design process. In fact, it is a design process that has borne real fruit. Denotational design is the process that has been elaborately developed by Conal Elliott. He used it to create Functional Reactive Programming, which you have probably heard of.

Functional Reactive Programming has boomed into several different reactive style paradigms. It's a real thing that helps make new things in the world. It does it at a very low level, very abstract. It creates a lot of blooming, like different languages implementing in different ways, offshoots, things that aren't FRP, but were inspired by FRP. That's functional reactive programming by the way.

He's currently using it to rethink and redesign deep learning, neural networks, and back propagation and all that. I'm looking forward to that. The work is still in progress so there's not much to report on yet, but there's talks out there.

I like that it's a real design process. As our industry matures we're going to need real processes for coming up with these novel things.

How do you actually model a thing in the world? We don't know how to do that. Not in a step-by-step process. Not in a way that we can collaborate well with each other on.

We're just kind of ad hoc, making things up, designing them on a whiteboard, drawing arrows and things. This is a step-by-step, principled process. From my exploration of it, I think it's really cool.

I particularly like that the first step is kind of a "Let's step back from any notion of implementation," because we're doing design here. We don't want to jump in and say, "An image is an array of pixels."

That's too soon, yet that's where most of us start. If we start there, we're bringing a lot of implementation baggage into our design before we've even thought about what we want to do.

I think that this is a very important step that we have to take ourselves. We say, "Oh, but it's got to be an array of pixels because that's the most efficient thing for the library I'm using."

You're choosing a library already? Step back for a second and think. What is an image? What does it mean? What does it mean to have an image or to be an image? What does it mean?

Conal Elliott gave a two-and-a-half-hour lecture in which he goes over how he would design an image system, a graphics system.

He said in his system, "An image is just a function from a pixel location, so an X, Y coordinate to color, where X, Y are in the real number space. It is continuous. It's not like an integer, like an array index. It is real numbers. You can ask for any point in that X, Y space and get a color for it."

This is stepping way back and saying, "What does it mean to be an image?" You might disagree with that definition, but it is a much more abstract notion of image than array of pixels. That's what's important, is that he took that step back and said, "It's a mapping from location to color and I can represent that as a function." Functions are mappings.
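To make that definition concrete, here is a minimal Python sketch of the idea (Conal works in Haskell; the names `make_circle`, `red`, and `white` are my own illustrative choices, not from his lecture). An image is nothing but a function from a continuous (x, y) point to a color:

```python
def make_circle(cx, cy, r, color, background):
    """An image is a function from a continuous (x, y) point to a color.
    This one is 'color' inside a circle of radius r, 'background' outside."""
    def image(x, y):
        return color if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 else background
    return image

red, white = (255, 0, 0), (255, 255, 255)
disc = make_circle(0.0, 0.0, 1.0, red, white)

disc(0.5, 0.5)   # inside the circle -> (255, 0, 0)
disc(2.0, 2.0)   # outside the circle -> (255, 255, 255)
```

Notice there are no pixels and no resolution anywhere: you can sample any real-valued point and get a color.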

I like that stepping back, going back to first principles, really rethinking the definition of something. Another step in this process...I will go over the process just after I talk about why I like it. The other thing is he uses algebraic properties and category theory. I think algebraic properties are a very good indicator that you are, "on to something" in the design.

When we're making design decisions, we need some assurance, something to follow to see if we're getting somewhere, if we're going in a good direction. Algebraic properties seem to be a pretty good way to do that. This is my opinion, of course, but it's why I like this process. I'm not sure what else you would choose besides something like developer ergonomics, or efficiency, or something like that.

Algebraic properties are a good gradient to follow to get to a good design. That's why I like denotational design. I've used it myself as a way of stepping back and thinking through what something means, and then bringing that forward into an implementation eventually.

I have watched and read as much as I've been able to find that was accessible to me and my level of understanding of category theory and stuff. I've watched as much as I could, read as much as I could on denotational design. I've even given talks where I've tried to come up with a process behind it, like a step-by-step process.

Unfortunately in all the material I've found, it starts off step-by-step. They'll say, "OK, this is the first step. This is the second step," then there's no more third and fourth, no two, three...It trails off into, "Isn't this stuff cool?", "Isn't this nice?", "Look where we've been." There's no more steps.

I've watched it and re-watched it, trying to see the steps, doing it myself and saying, "Where did the steps go?" This is my overlaying on him. This is not what he's calling denotational design. It's what I've managed to pull out of his material on denotational design. I would love to collaborate with him to come up with a concrete list of steps to do.

These are the four steps that I see that are clear enough. It started out as three. Then I used it a bunch and said, "No, there's a fourth step in here that I'm skipping." Steps are important because you need to be able to teach it. If you can't say, "Now, we're doing this step," someone might get lost and feel like, "I'm doing what you are doing, which is just wandering around." No, I'm not wandering around. I'm going down this path, step-by-step.

The first one is...I think of it as zenning out and forgetting all the implementation assumptions that you have made. Every computer graphics system you've made, let's say, you're introspecting on it, meditating on it: what is an image? You just ask this very basic question and get at some fundamental idea.

You empty your mind, you go beginner's mind, and you just really try to think, what is an image? The same thing like, well, if I have a teacup, and I break it, is it still a teacup? That kind of question, very Zen Kōan kind of question. That's where you got to go. It's like a philosophical question about meaning. What does an image mean?

Then you do that, and you come up with a definition. Something that you can encode in your language. He uses Haskell. He often will use a function like image as a mapping, as a function from a point on a plane to color, so X,Y coordinate to color. Then color, what is color? You could ask that question. Or you could just cop out and just say, "Oh, it's just an RGB."

Whatever, whatever you do, but that image question is the one he was after so that's the one he does in the video. Then you explore this. You say, "Well, if I have this type, it's a function type, then what operations can I do on this type? Can I express all the things that I expect to be able to express? Can I translate the image?"

Yes, because I can make a function that takes an image and returns a new function that translates by adding the X and the Y and you can see how it's done. You just elaborate and explore that definition. What does this allow me to do? Can I find some minimal set of operations that other operations can be defined in terms of? You're exploring it a lot based on the ease of implementing.
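The translate operation described above can be sketched in a few lines of Python (again an illustrative sketch, not Conal's actual code): translating an image is just composing the image function with a coordinate shift.

```python
def translate(image, dx, dy):
    # The translated image answers "what color is at (x, y)?" by looking up
    # the original image at the point shifted back by (dx, dy).
    return lambda x, y: image(x - dx, y - dy)

# A toy image: "black" to the left of x = 0, "white" to the right.
half = lambda x, y: "black" if x < 0 else "white"

shifted = translate(half, 3, 0)  # move the boundary from x = 0 to x = 3
shifted(2, 0)   # -> "black", because half(2 - 3, 0) = half(-1, 0)
shifted(4, 0)   # -> "white"
```

No pixels were copied; the operation is defined purely in terms of the meaning of an image as a function.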

How clear is it to implement this operation using the type that I have? This is something that I've added myself. I don't remember Conal ever talking about this, but design in other domains and other processes is very often incremental. You don't get it right the first time.

You have to go back and revise and you make an attempt in a certain direction, and you learn something, and you bring that back to the beginning. You start over and you make a new thing. In his presentations, it's usually very linear, which is a good way to present it.

I think it's important to remember that you might not get the image definition correct the first time. The best possible definition. You're going to have to step back again, with the stuff that you've learned and incorporate that into your definition.

The next thing that I see in his steps is that he starts to align things with category theory concepts. A lot of times, you'll say, "Whoa, that image is actually a functor." You see it because we're treating a transformation like a translation or a rotation as a functor operation on an image.

You start to align these things with the category theory concepts, with algebraic, abstract algebra kinds of things. That helps guide and mold the design, the order of the arguments and things. You start to see, oh yeah, this will fit in here and this fits in there, because category theory, one way of looking at it, is a kind of algebra of composition. How do functions, objects, and types fit together?

That's what that is. That's what you're doing, is you're trying to align it so that it fits and all these things. I actually think that that's a very fruitful way of finding some kind of gradient that you can follow to find the good design. There are others, like I said, developer ergonomics is one, performance is another. Those are things you could follow.

Then the interesting thing about this is that at this point he's implemented very little. He has not implemented a way to draw these images on the screen, for instance. He just has defined it as a type. Some functions are implemented because he can implement them easily in terms of other things.

I talked about translate. It's very easy because you can just say translate returns a new image with the X and the Y added to the offset. It just moves it. You haven't defined how to make one of these images yet; you just know the type. That's one of the nice things about type systems: they give you a thing you can reason about without actually having an implementation. That's pretty cool.

Then the final thing is actually implementing it. You have the design. You've gone through this incremental process of design. You've iterated on it multiple times, you've gotten it very clean and nice before you've implemented it. Now you know exactly what you need to implement. Often that implementation is very straightforward. Sometimes it's still hairy because you're dealing with another system.

What you need to develop and design...implement. Sorry, I don't want to use the word design.

What you need to implement is very clear. Well, I forgot what I want to say. All right. Let's talk about the second part. The second part of this question. I'll read the question again. I'm curious to learn why you are a fan of denotational design, even though it's more challenging in a dynamic language like Clojure.

Like I said, Conal Elliott uses Haskell for these things. Haskell has a static type system where category theory can be expressed pretty well.

The types let him reason about the meaning of the system before he even implements it. Why Clojure? Why wouldn't I do this in Haskell? Is it more challenging in Clojure?

I want to start this off by saying I don't want to compare Clojure to Haskell in this episode. That's not what this is about. I'm trying to address this very specific question about whether it is more challenging in Clojure to do denotational design.

If I say anything about Haskell, I'm not saying it's bad. I like Haskell. It's a great language. I used to work in Haskell. Right now, I prefer Clojure, so you'll know my bias is there, but that's not what I'm talking about right now.

The first thing I want to say is for doing denotational design, Haskell isn't perfect either. It's challenging in Haskell. I'll name a few things that are either impossible or not really helped by Haskell. Haskell has no type for real numbers. Most languages don't. How do you represent a continuous number with arbitrary precision?

We don't have a good way of doing that, especially for an irrational number. How do you represent that? That's the first thing. It's something that comes up in forums a lot. People say, "You say you want real numbers, but that's impossible. If you try to compare two irrational numbers, you never know if, one more digit in, they're going to be different." That's a challenge. This is something that he thinks is important for the domain model, but it's not available in Haskell.

Another thing is, when you're talking about, say, the Monad laws or the Functor laws, associativity, you're talking about equality between two different expressions. If you're talking about commutativity, f(a, b) = f(b, a).

What if a and b are functions, or f returns a function? You can't do that equality comparison. Conal talks about this as well in his talk. He's like, "You know this is not valid Haskell." You cannot do an equality comparison between functions and get the answer you're expecting.

If they happen to be exactly the same function, like they're both assigned to the same variable, they're both given the same name, then you know a = a. Yeah, sure. But if they were produced in two different ways, there's no way to compare the two functions to see that they're going to always have the same return value for all arguments. There is no way to do that.

Finally, commutativity and associativity, very important properties. The type system doesn't help you with those. There's no way to express them in Haskell. When you define, let's say, a monoid, because we're talking about associativity, you just have to trust that the function you give it is associative. The type system can't check that for you.

How do you check for equality between functions and for these algebraic properties? You could use property-based testing and Haskell has a good property-based testing system. Probably several. Clojure has one as well so we do the same thing there.
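The property-based testing idea mentioned above can be sketched very simply (a hand-rolled Python illustration, not a real library like QuickCheck or test.check): since neither type system can prove associativity, you gain confidence by checking (a op b) op c == a op (b op c) on many random inputs.

```python
import random

def associative(op, gen, trials=1000):
    # A hand-rolled property test: try the associativity law on many
    # randomly generated inputs. It can't prove the law, but a pass on
    # a thousand random triples is good evidence.
    for _ in range(trials):
        a, b, c = gen(), gen(), gen()
        if op(op(a, b), c) != op(a, op(b, c)):
            return False
    return True

rand_int = lambda: random.randint(-1000, 1000)

associative(lambda a, b: a + b, rand_int)   # -> True: addition is associative
# Subtraction is not: (1 - 2) - 3 = -4, but 1 - (2 - 3) = 2, so random
# inputs find a counterexample almost immediately.
associative(lambda a, b: a - b, rand_int)   # almost certainly False
```

Real libraries like Haskell's QuickCheck or Clojure's test.check add generator combinators and shrinking of counterexamples, but the core idea is this small.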

Now I want to talk about whether it's hard to do denotational design in Clojure. I do think it's a little harder than in Haskell, but I also think that most of the design part is happening in your head. You're making little notes, like, "Oh, the type of this function is this. The type of image is this."

That helps you, but you're not running that program through the compiler to have it check it for you, because you don't have anything for it to run.

Maybe people have a workflow for testing out types that don't have any instances of them. You know what I mean? There's no function that has that type. Can you still write a program that runs it? I don't know, I'm not sure. If you could, then it would be easier in Haskell. What I'm saying is, most of the reasoning is happening in your head. The Haskell type system that you're reasoning about in your head helps. I've been there. I worked in Haskell.

I have internalized enough of the Haskell type system. Certainly not all, OK. I'm not a Haskell expert or anything, but I've internalized enough of it that I can do that in my head even in Clojure. I do make notes about the types. I do make notes about, "OK, this is an image. It's going to take a X, Y coordinate and it's going to return a color." I do make notes about that.

Yes, it is a little bit more challenging, but 90 percent of the work is done in your head anyway, so it's not a big change. I have used spec a little bit to help capture the design, but it's a little bit cumbersome. I'm going to keep trying to use spec to do that. Spec is a Clojure library for defining the types, structure, and values of the data that you're interested in.

I hope to find a way of doing that, of using spec to maybe even get a leap over Haskell, because it lets you automatically create generators for property-based testing. You're able to talk about the values because the checks happen at runtime. It's not as good for reasoning about functions. Clojure doesn't have currying, which would be helpful. I still find that the process of stepping back, zenning out, exploring the definition, what things would look like.

Those two steps don't require the type system, don't require type checking. The construction along the lines of category theory abstractions, that does get into that category or that difficult stuff. Like I said, those very important algebraic properties aren't helped by the Haskell type system either, so I don't feel like Clojure is that far behind in it.

I do worry that it does require you to know the Haskell type system, in the sense that I've internalized it. A Clojure programmer would look at what I'm doing and say, "Oh, it's too much work. It's too abstract," whereas I'm just doing very concrete steps that I've learned through experience from the Haskell system. I'm a little worried about that, but I think we can work through it.

I hope that answers that two-part question. I'll read the question again. This question's from Scott Nimrod. Got it on Twitter. It's, "I'm curious to learn why you're a fan of denotational design even though it's more challenging in a dynamic language like Clojure."

To sum this up, I think that denotational design is a lot less about static typing than this question implies. It's a lot less about types. It's about design. It's about going back to first principles, building things up, understanding how things compose, and following a different gradient from what most people use when they design.

Most people use something like either performance — what they perceive would be faster — or developer ergonomics like, "Oh, look what I can do if I put the argument over here. It lets me write this which feels cooler. It's shorter."

Something like that, as opposed to algebraic properties, which I think have precedent in reality. [laughs] Let's put it like that. Associativity's a pretty common property, at least around my neck of the woods.

I hope that answers the question. Thank you so much, Scott, for the question. I know you have a few more, but I'll save those for other episodes.

If you like this episode, you can find all the past episodes at lispcast.com/podcast. There you'll find audio, video, and text transcripts of all the episodes. You'll also find links to subscribe in whatever way you want. Please do subscribe. That way you'll get all the future episodes.

You'll also find links to social media so you can ask me questions too. I hope to answer them soon.

This has been my thought on functional programming. My name is Eric Normand. Thanks for listening and rock on.

Transcript

Eric Normand: Why do I like Denotational Design? In this episode I hope to explore this really cool design process that uses Functional Programming. Talk about why I like it and also talk a little bit about the difficulties using it in a statically typed versus a non-typed language.

Hi, my name is Eric Normand and I help people thrive with Functional Programming. I was asked on Twitter by Scott Nimrod a really great question. I'll read the question. It's actually got two parts to it, but I'll read it. The question is "I'm curious to learn why you are a fan of denotational design, even though it's more challenging in a dynamic language like Clojure?"

That's got two parts really. The first part is, "Why do you like denotational design?" Let me address that one first. Denotational design is a cool design process. In fact, it is a design process that has borne real fruit. Denotational design is the process that has been elaborately developed by Conal Elliott. He used it to create Functional Reactive Programming, which you have probably heard of.

Functional Reactive Programming has boomed into several different reactive style paradigms. It's a real thing that helps make new things in the world. It does it at a very low level, very abstract. It creates a lot of blooming, like different languages implementing in different ways, offshoots, things that aren't FRP, but were inspired by FRP. That's functional reactive programming by the way.

He's currently using it to rethink and redesign deep learning, neural networks, and back propagation and all that. I'm looking forward to that. The work is still in progress so there's not much to report on yet, but there's talks out there.

I like that it's a real design process. As our industry matures we're going to need real processes for coming up with these novel things.

How do you actually model a thing in the world? We don't know how to do that. Not in a step-by-step process. Not in a way that we can collaborate well with each other on.

We're just kind of ad hoc, making things up, designing them on a whiteboard, drawing arrows and things. This is a step-by-step principle process. From my exploration of it, I think it's really cool.

I particularly like that the first step is kind of a "Let's step back from any notion of implementation," because we're doing design here. We don't want to jump in and say, "An image is an array of pixels."

That's too soon yet that's where most of us start. If we're going to start, we're bringing a lot of implementations baggage into our design before we've even thought about what we wanted to do.

I think that this is a very important step that we have to take ourselves. We say, "Oh, but it's got to be an array of pixels because that's the most efficient thing for the library I'm using, but Oh"

You're choosing a library already? Step back for a second and think. What is an image? What does it mean? What does it mean to have an image or to be an image? What does it mean?

Conal Elliott had a two-and-a-half-hour lecture. He goes over how he would design an image system, a graphic system.

He said in his system, "An image is just a function from a pixel location, so an X, Y coordinate to color, where X, Y are in the real number space. It is continuous. It's not like an integer, like an array index. It is real numbers. You can ask for any point in that X, Y space and get a color for it."

This is stepping way back and saying, "What does it mean to be an image?" You might disagree with that definition, but it is a much more abstract notion of image than array of pixels. That's what's important, is that he took that step back and said, "It's a mapping from location to color and I can represent that as a function." Functions are mappings.

I like that stepping back, going back to first principles, really rethinking the definition of something. Another step in this process...I will go over the process just after I talk about why I like it. The other thing is he uses algebraic properties and category theory. I think algebraic properties are a very good indicator that you are, "on to something" in the design.

When we're making design decisions, we need some assurance, something to follow to see if we're getting somewhere, if we're going in a good direction. Algebraic properties seem to be a pretty good way to do that. This is my opinion, of course, but it's why I like this process. I'm not sure what else you would choose besides something like developer ergonomics, or efficiency, or something like that.

Algebraic properties are a good gradient to follow to get to a good design. That's why I like denotational design. I've used it myself as a way of stepping back and thinking through what something means, and then bringing that forward into an implementation eventually.

I have watched and read as much as I've been able to find that was accessible to me and my level of understanding of category theory and stuff. I've watched as much as I could, read as much as I could on denotational design. I've even given talks where I've tried to come up with a process behind it, like a step-by-step process.

Unfortunately in all the material I've found, it starts off step-by-step. They'll say, "OK, this is the first step. This is the second step," then there's no more third and fourth, no two, three...It trails off into, "Isn't this stuff cool?", "Isn't this nice?", "Look where we've been." There's no more steps.

I've watched it and re-watched it, trying to see the steps, doing it myself and saying, "Where did the steps go?" This is my overlaying on him. This is not what he's calling denotational design. It's what I've managed to pull out of his material on denotational design. I would love to collaborate with him to come up with a concrete list of steps to do.

These are the four steps that I see that are clear enough. It started out as three. Then I've used it a bunch and I said, "No, there's a fourth step in here that I'm skipping." Steps are important because you need to be able to teach it. If you can't say, "Now, we're doing this step," someone might get lost. Feel like, "I'm doing what you are doing, which is just wandering around." No, I'm not wandering around. I'm going on this path, step-by-step.

This first one is to...I think of it as zenning out and forgetting all the implementation assumptions that you have made. Every time you've made let's say a computer graphics system, every computer graphics system you've made, and introspecting, meditating on it, what is an image? You just ask this very basic question and get at some fundamental idea.

You empty your mind, you go beginner's mind, and you just really try to think, what is an image? The same thing like, well, if I have a teacup, and I break it, is it still a teacup? That kind of question, very Zen Kōan kind of question. That's where you got to go. It's like a philosophical question about meaning. What does an image mean?

Then you do that, and you come up with a definition. Something that you can encode in your language. He uses Haskell. He often will use a function like image as a mapping, as a function from a point on a plane to color, so X,Y coordinate to color. Then color, what is color? You could ask that question. Or you could just cop out and just say, "Oh, it's just an RGB."

Whatever, whatever you do, but that image question is the one he was after so that's the one he does in the video. Then you explore this. You say, "Well, if I have this type, it's a function type, then what operations can I do on this type? Can I express all the things that I expect to be able to express? Can I translate the image?"

Yes, because I can make a function that takes an image and returns a new function that translates by adding the X and the Y and you can see how it's done. You just elaborate and explore that definition. What does this allow me to do? Can I find some minimal set of operations that other operations can be defined in terms of? You're exploring it a lot based on the ease of implementing.

How clear is it to implement this operation using the type that I have? This is something that I've added myself. I don't remember Conal ever talking about this, but design in other domains and other processes is very often incremental. You don't get it right the first time.

You have to go back and revise and you make an attempt in a certain direction, and you learn something, and you bring that back to the beginning. You start over and you make a new thing. In his presentations, it's usually very linear, which is a good way to present it.

I think it's important to remember that you might not get the image definition correct the first time. The best possible definition. You're going to have to step back again, with the stuff that you've learned and incorporate that into your definition.

The next thing that I see in his steps is that he starts to align things with category theory concepts. A lot of times, you'll say, "Whoa, that image is actually a functor." You see because we're doing this transformation like a translation or a rotation as a functor on an image.

You start to align these things with the category theory concepts, with algebraic, abstract algebra kinds of things. That helps guide and mold the design, the order of the arguments and things. You start to see, oh yeah, this will fit in here and this fits in there because category theory, one way of looking at it is a kind of an algebra of composition. How do functions and objects types, how do they fit together?

That's what that is. That's what you're doing, is you're trying to align it so that it fits and all these things. I actually think that that's a very fruitful way of finding some kind of gradient that you can follow to find the good design. There are others, like I said, developer ergonomics is one, performance is another. Those are things you could follow.

Then the interesting thing about this is that at this point he's implemented very little. He has not implemented a way to draw these images on the screen, for instance. He just has defined it as a type. Some functions are implemented because he can implement them easily in terms of other things.

I talked about the translate, it's very easy because you can just say, will translate returns a new image with the X and the Y added to this other X and Y. It just moves it. You haven't defined how to make one of these images yet, you just know the type. That's one of the nice things about type systems is they give you a thing you can reason about without actually having an implementation. That's pretty cool.

Then the final thing is actually implementing it. You have the design. You've gone through this incremental process of design. You've iterated on it multiple times, you've gotten it very clean and nice before you've implemented it. Now you know exactly what you need to implement. Often that implementation is very straightforward. Sometimes it's still hairy because you're dealing with another system.

What you need to develop and design...implement. Sorry, I don't want to use the word design.

What you need to implement is very clear. Well, I forgot what I want to say. All right. Let's talk about the second part. The second part of this question. I'll read the question again. I'm curious to learn why you are a fan of denotational design, even though it's more challenging in a dynamic language like Clojure.

Like I said, Conal Elliot uses Haskell for these things. Haskell as a static type system where category theory can be expressed pretty well.

The types let him reason about the meaning of the system, before he even implements it. Why closure? Why wouldn't I do this in Haskell? Is it more challenging in Clojure?

I want to start this off by saying I don't want to compare Clojure to Haskell in this episode. That's not what this is about. I'm trying to address this very specific question about whether it is more challenging in Clojure to do denotational design.

If I say anything about Haskell, I'm not saying it's bad. I like Haskell. It's a great language. I used to work in Haskell. Right now, I prefer Clojure, so you'll know my bias is there, but that's not what I'm talking about right now.

The first thing I want to say is for doing denotational design, Haskell isn't perfect either. It's challenging in Haskell. I'll name a few things that are either impossible or not really helped by Haskell. Haskell has no type for real numbers. Most languages don't. How do you represent a continuous number with arbitrary precision?

We don't have a good way of doing that, especially for an irrational number. How do you represent that? That's the first thing. It comes up in forums a lot. People say, "You want real numbers, but that's impossible. If you try to compare two irrational numbers, you never know if, one more digit in, they're going to differ." That's a challenge. This is something that Conal thinks is important for the domain model, but it isn't available in Haskell.

Another thing: when you're talking about, say, the Monad laws or the Functor laws, or about associativity, you're talking about equality between two different expressions. If you're talking about commutativity, F of A and B equals F of B and A, that is, F(A, B) = F(B, A).

What if A and B are functions, or F returns a function? You can't do that equality comparison. Conal talks about this in his talks as well. He's like, "You know this is not valid Haskell." You cannot compare functions for equality and get the answer you're expecting.

If they happen to be exactly the same function like they're both assigned to this variable, they're both given the same name and you know A=A. Yeah sure, but if they were produced in two different ways, there's no way to compare the two functions to see that they're going to always have the same return value for all arguments. There is no way to do that.
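The same limitation shows up in Python, so it makes a convenient illustration. The functions `double_a` and `double_b` below are hypothetical examples of two functions "produced in two different ways" that agree on every input.

```python
# Two extensionally equal functions, built differently.
def double_a(n):
    return n * 2

def double_b(n):
    return n + n

# Python, like Haskell, has no decidable equality on functions;
# == compares object identity, not behavior.
print(double_a == double_b)  # False, even though they agree everywhere

# The best we can do is spot-check agreement on sample inputs:
print(all(double_a(n) == double_b(n) for n in range(100)))  # True
```

Proving they agree for *all* arguments would require reasoning about the definitions themselves, which no equality operator can do in general.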

Finally, commutativity and associativity are very important properties, and the type system doesn't help you with them at all. There's no way to express them in Haskell. When you define, let's say, a monoid (since we're talking about associativity), you just have to trust that the function you give it is associative. The type system can't check that for you.

How do you check for equality between functions and for these algebraic properties? You can use property-based testing, and Haskell has a good property-based testing system. Probably several. Clojure has one as well, so we can do the same thing there.
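A property-based check for associativity can be sketched with nothing but random inputs. This is a hand-rolled stand-in for a real library like QuickCheck or test.check; `assoc_check` and `gen_int` are names invented for the example.

```python
import random

def assoc_check(op, gen, trials=1000):
    # Randomized spot-check of the associativity law:
    # op(a, op(b, c)) == op(op(a, b), c) for random a, b, c.
    for _ in range(trials):
        a, b, c = gen(), gen(), gen()
        if op(a, op(b, c)) != op(op(a, b), c):
            return False  # found a counterexample
    return True  # no counterexample found (not a proof!)

gen_int = lambda: random.randint(-1000, 1000)

print(assoc_check(lambda x, y: x + y, gen_int))  # addition passes
print(assoc_check(lambda x, y: x - y, gen_int))  # subtraction almost surely fails
```

A passing run is only evidence, not a proof, which is exactly the trade-off the transcript describes: the type system can't express the law, so testing has to pick up the slack.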

Now I want to talk about whether it's hard to do denotational design in Clojure. I do think it's a little harder than in Haskell, but I also think that most of the design is happening in your head. You're making little notes, like, "Oh, the type of this function is this. The type of an image is this."

That helps you, but you're not running that program through the compiler to have it check it for you, because you don't have anything for it to run.

Maybe people have a workflow for testing out types that don't have any instances of them. You know what I mean? There's no function that has that type. Can you still write a program that checks it? I don't know, I'm not sure. If you could, then it would be easier in Haskell. What I'm saying is, most of the reasoning is happening in your head. The Haskell type system that you're reasoning about in your head helps. I've been there. I worked in Haskell.

I have internalized enough of the Haskell type system. Certainly not all of it, OK? I'm not a Haskell expert or anything, but I've internalized enough that I can do that in my head even in Clojure. I do make notes about the types. I do make notes like, "OK, this is an image. It's going to take an X, Y coordinate and return a color."

Yes, it is a little bit more challenging, but 90 percent of the work is done in your head anyway, so it's not a big change. I have used spec a little bit to help capture the design, but it's a little cumbersome. I'm going to keep trying to use spec for that. Spec is a Clojure library for defining the types, structure, and values of the data you're interested in.

I hope to find a way of using spec to maybe even get a leap over Haskell, because it lets you automatically create generators for property-based testing. You're able to talk about the values because the checks happen at runtime. It's not as good for reasoning about functions, and Clojure doesn't have currying, which would be helpful. I still find value in the process of stepping back, zenning out, exploring the definition and what things would look like.

Those two steps don't require the type system or type checking. The construction along the lines of category theory abstractions, that does get into the more difficult stuff. Like I said, those very important algebraic properties aren't helped by the Haskell type system either, so I don't feel like Clojure is that far behind.

I do worry that it does require you to know the Haskell type system, in the sense that I've internalized it. A Clojure programmer would look at what I'm doing and say, "Oh, it's too much work. It's too abstract," whereas I'm just doing very concrete steps that I've learned through experience with the Haskell type system. I'm a little worried about that, but I think we can work through it.

I hope that answers that two-part question. I'll read the question again. This question's from Scott Nimrod. Got it on Twitter. It's, "I'm curious to learn why you're a fan of denotational design even though it's more challenging in a dynamic language like Clojure."

To sum this up, I think that denotational design is a lot less about static typing than this question implies. It's a lot less about types. It's about design. It's about going back to first principles, building things up, understanding how things compose, and following a different gradient from what most people use when they design.

Most people use something like either performance — what they perceive would be faster — or developer ergonomics like, "Oh, look what I can do if I put the argument over here. It lets me write this which feels cooler. It's shorter."

Something like that, as opposed to algebraic properties, which I think have precedence in reality. [laughs] Let's put it like that. Associativity's a pretty common property, at least around my neck of the woods.

I hope that answers the question. Thank you so much, Scott, for the question. I know you have a few more, but I'll save those for other episodes.

If you like this episode, you can find all the past episodes at lispcast.com/podcast. There you'll find audio, video, and text transcripts of all the episodes. You'll also find links to subscribe in whatever way you want. Please do subscribe. That way you'll get all the future episodes.

You'll also find links to social media so you can ask me questions too. I hope to answer them soon.

This has been my thought on functional programming. My name is Eric Normand. Thanks for listening and rock on.