The 3 levels of functional thinking

I've noticed that people go through a certain journey when learning functional programming. I've classified it into three levels: 1) the distinction between Actions, Calculations, and Data, and learning to use them effectively; 2) higher-order thinking, and building abstractions from higher-order functions; 3) algebraic thinking, building coherent models with a focus on composition. This is a work in progress and I'd love your input.

Transcript

Eric Normand: What are the three levels of functional thinking?

In this episode, I'm going to talk about my thinking about progress through the skills and thought processes that go into functional programming. My name is Eric Normand, and I help people thrive with functional programming.

I want to say that this is a work in progress. It is one way of mapping out the skills and categorizing them as a progression. It's not the only way, and I have no hard evidence that the skills are learned in this order.

My observations are mostly anecdotal. I'm noticing that people learn a bunch of stuff and then get stuck, or they're at a certain spot, still progressing, but they haven't learned this other thing yet. It's just me putting it together.

I'm not creating some model that people are going to have to stick to or anything. It's mostly a way to organize the material that I'm putting into my book.

Here are the three levels. Remember, this is a work in progress. I'd love to discuss it, but I'm not going to die on this hill or anything.

The first one is the awareness and use of the distinction of actions, calculations, and data. Actions are things that depend on time. They depend on when they're run and how many times they're run. They have effects on the world or are affected by the world.

Calculations are computations from inputs to outputs. They don't depend on time. If you give them the same inputs, they're going to give you the same output. Finally, data is facts about events. It's very inert. It doesn't do anything on its own; it requires interpretation.
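To make the distinction concrete, here is a rough sketch in TypeScript (my choice for illustration only; the ideas aren't tied to any language, and the names here are made up):

```typescript
// Data: an inert fact about an event. It doesn't do anything on its own.
interface Order {
  id: string;
  amountCents: number;
}

// Calculation: same inputs, same output, no matter when or how often it runs.
function totalCents(orders: Order[]): number {
  return orders.reduce((sum, order) => sum + order.amountCents, 0);
}

// Action: it matters when and how many times this runs, and it affects
// the outside world.
async function chargeCustomer(order: Order): Promise<void> {
  await fetch("https://payments.example.com/charge", {
    method: "POST",
    body: JSON.stringify(order),
  });
}
```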

When you're at this first level, your main challenge with actions is learning how to deal with time: how to manipulate it and master it, so that you can guarantee the ordering of actions when you need it guaranteed.

You can guarantee that things aren't running at the same time if they shouldn't be, and guarantee that they happen the correct number of times. These are the challenges you face when you're dealing with actions.
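For instance, here is one small way to guarantee that an action happens at most once, no matter how many times it gets called (a minimal sketch, not a full answer to ordering or concurrency):

```typescript
// Wrap an action so it runs at most once, however many times it's called.
function once(action: () => void): () => void {
  let hasRun = false;
  return () => {
    if (!hasRun) {
      hasRun = true;
      action();
    }
  };
}

// Hypothetical usage: the email goes out once even if the handler fires twice.
const sendWelcomeEmail = once(() => console.log("Sending welcome email..."));
sendWelcomeEmail();
sendWelcomeEmail(); // no effect the second time
```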

With calculations, the challenge is to start modeling your program in terms of data transformations. It can be very difficult for people coming from another paradigm to stop using mutable state and to model things as data transformations rather than step-by-step instructions, like in an algorithm.

You're learning to think about all the stuff your program does that doesn't really need to be done as a side effect, as an action. Some side effects are necessary. You want your program to send an email, and it's incorrect if it doesn't send that email. That is a necessary action.

Do you really need to use that global variable as scratch space for your algorithm? Probably not. If you don't use it, none of your users are going to be upset. It's still a correct program. That's an unnecessary action.

We, as functional programmers, tend to frown upon unnecessary actions, and we want to convert them into calculations. That's the challenge, learning how to do that. Sometimes it is relearning how to program even the simplest things using calculations instead of actions.
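Here is a made-up example of that relearning: the same work, first as an unnecessary action, then as a calculation:

```typescript
// Unnecessary action: a global variable used as scratch space, so the
// function reads and writes shared, mutable state.
let scratch: number[] = [];

function doubleAllImpure(xs: number[]): number[] {
  scratch = [];
  for (const x of xs) {
    scratch.push(x * 2);
  }
  return scratch;
}

// The same thing as a calculation: no shared state, so the output
// depends only on the input.
function doubleAll(xs: number[]): number[] {
  return xs.map((x) => x * 2);
}
```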

With data, it's about modeling. It's about making sure your data has the right structure to support the algorithms you need to run, and that you're capturing the data you need. All of that comes under data modeling. Those are the three things you're distinguishing as a functional programmer at level one, and you're learning to work with them.

You keep learning and you eventually get to level two, which is where you have higher-order thinking. You've mastered doing stuff with immutable data and thinking of things as data transformation, and you start to realize that there's a lot of duplicated functionality.

You've been using, let's say, for loops to make lists of things from other lists. You think, "Well, I could be doing this with a function that I pass a function to." I'm essentially going to pass the body of the for loop as a function into Map. This is higher-order thinking. You start thinking in terms of pieces of algorithms that can be passed to other algorithms.
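A quick sketch of that move, with hypothetical names:

```typescript
interface User {
  email: string;
}

// The for-loop version: building one list from another by hand.
function emailsWithLoop(users: User[]): string[] {
  const result: string[] = [];
  for (const user of users) {
    result.push(user.email);
  }
  return result;
}

// The higher-order version: the body of the loop becomes a function
// passed to map.
function emailsWithMap(users: User[]): string[] {
  return users.map((user) => user.email);
}
```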

That's another challenge that you come to. Some of the challenges you could think of as dragons that you have to avoid. One is over-abstracting. It's very easy to get carried away and end up with unreadable code because you're using second- or third-order functions, and it's very difficult to see what's going on.

Then there's not using them enough and running the risk of software that doesn't scale: your code does not scale, and you have to write the same number of lines for every feature. If a feature takes 100 new lines of code and you need 20 features, that means you're going to need 2,000 lines of code.

Whereas if you're using higher-order thinking, it should get easier to write each feature, with fewer lines of code, because you're able to find the essential abstractions that work in your domain.

There are some that are universal: Map, Filter, and Reduce are very common. You should also be able to find some that work only in your domain, ones that don't make sense in a standard library. This is number two, higher-order thinking. That's where people start thinking of functional programming as data transformation pipelines.

I'm doing Maps and Filters, and Maps and Filters, and I have these pipelines where all this work is getting done through the sequence functions. There's another level. I've met a lot of people who get comfortable at level two and that's where they stay.
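To make that concrete, a domain-specific higher-order function might look something like this (a hypothetical sketch, not from any real codebase):

```typescript
interface LineItem {
  sku: string;
  priceCents: number;
}

// A higher-order function that belongs to this domain rather than a
// standard library: apply a pricing rule to every line item and total it.
function totalWithPricing(
  items: LineItem[],
  rule: (item: LineItem) => number
): number {
  return items.reduce((sum, item) => sum + rule(item), 0);
}

// Usage: the pricing rule is just a function you pass in.
const total = totalWithPricing(
  [
    { sku: "A1", priceCents: 500 },
    { sku: "B2", priceCents: 300 },
  ],
  (item) => (item.sku.startsWith("A") ? item.priceCents * 0.9 : item.priceCents)
);
```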

There's not a lot written about level three. The stuff that I've seen written about it is often very abstract and obtuse. It's abstract because it's at the next level, so it's going to seem out there. You can get there. I hope to help people get there. I hope to find a path that's not too abstract and that gets people there.

We should be able to do this in my book. This is going to be, obviously, later in the book. I haven't gotten to it yet. This is level three, which is algebraic thinking. I don't even have a good definition of it, a good explanation for it. This is one of those things where I'd love to get into discussions with people.

This is where you are focused on building models that compose nicely. By nicely, I mean very few corner cases. They compose well, and when you compose them, they have nice properties. You're able to build a semantically complete system of interworking concepts.

You're using everything from levels one and two to build something cohesive that operates in the abstract concepts of your domain.

When you're talking about a data transformation pipeline, you're often looking at, "OK, I'm getting this CSV and it's got these values in it. I need to change it into something I can send to this JSON API." It's a slightly different format, so I've got to transform it.
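That kind of mechanical transformation might look like this (the field names and shapes here are hypothetical):

```typescript
// A row parsed out of the incoming CSV (hypothetical shape).
interface CsvRow {
  name: string;
  amount: string; // still a string, straight from the CSV
}

// The shape the JSON API expects (also hypothetical).
interface ApiPayload {
  customerName: string;
  amountCents: number;
}

// The pipeline: filter out blank rows, then map each row to the API shape.
function toApiPayloads(rows: CsvRow[]): ApiPayload[] {
  return rows
    .filter((row) => row.name !== "" && row.amount !== "")
    .map((row) => ({
      customerName: row.name,
      amountCents: Math.round(parseFloat(row.amount) * 100),
    }));
}
```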

You're thinking very mechanically, very data-specifically: what are the steps to go from this thing I have to this thing I need? That's very important to do, but I'm talking about something more, like being able to build a...

Let's say you're making a video editor. Now, you're going to want to model how the video editor will work on the backend. What are the concepts? How do they fit together so that they create a cohesive whole?

You can think of it as: I should be able to concatenate two segments of video. I should be able to cut a segment of the video. That means I'm going to have to model the segment. I'm going to have to model this operation of concatenating, which gives me a new segment that has the two combined in it.

I should be able to cut, which gives me two segments, the before and the after segment. You're reasoning about the operations on these things and how those operations compose together to build the functionality you will eventually need in your app.
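Here is a very rough sketch of what that model might start to look like; the representation is made up, and a real editor would need much more:

```typescript
// A clip is a span of time in some source video.
interface Clip {
  sourceId: string;
  startMs: number;
  endMs: number;
}

// A segment is a sequence of clips played back to back.
type Segment = Clip[];

// Concatenating two segments gives a new segment containing both.
function concat(a: Segment, b: Segment): Segment {
  return [...a, ...b];
}

// Cutting a segment at an offset gives the before and after segments.
function cut(segment: Segment, atMs: number): [Segment, Segment] {
  const before: Segment = [];
  const after: Segment = [];
  let elapsed = 0;
  for (const clip of segment) {
    const length = clip.endMs - clip.startMs;
    if (elapsed + length <= atMs) {
      before.push(clip);
    } else if (elapsed >= atMs) {
      after.push(clip);
    } else {
      // The cut lands inside this clip, so split it in two.
      const splitPoint = clip.startMs + (atMs - elapsed);
      before.push({ ...clip, endMs: splitPoint });
      after.push({ ...clip, startMs: splitPoint });
    }
    elapsed += length;
  }
  return [before, after];
}
```

One property to aim for with a model like this: concatenating the two halves of a cut should give you back something equivalent to the original segment. That's the kind of compositional reasoning I mean by algebraic thinking.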

Those are my three levels. Like I said several times already, this is a work in progress. Let me know what you think. Is there a fourth level that I'm missing? I haven't reached the end of my functional programming journey, so I'm probably not aware of what comes after this.

I'm definitely in level three myself. I know other people there, and I know people in the other two. I have some anecdotal evidence that this model makes sense. I'm going to wrap it up. I'll recap the three levels, which I'm identifying mostly to organize the curriculum in my book so that there's a sense of progress.

One is the distinction between actions, calculations, and data, and how to use them effectively. Two is higher-order thinking. This is where you're using first-class functions, where people start to talk about data transformation pipelines, things like that.

Three is algebraic thinking. This is where you're doing domain modeling at the operation level to be able to create a cohesive domain model in functions.

If you want to find all the old episodes, all future episodes, and this present episode, go to lispcast.com/podcast, and there you'll find a list of all the episodes with video, audio, and text transcripts. You'll also find links to subscribe and links to my social media so that we can get in touch if you want to do that.

I love having discussions, and I would love to have something like this fleshed out more, on more solid ground. Awesome. My name is Eric Normand, and rock on.