# Is FP just programming with pure functions?

This is an episode of The Eric Normand Podcast.

As I develop and expound this theory, it may seem too complicated. Isn't functional programming just programming with pure functions? Why make it more complicated than that? We talk about my reasons and my goals for the theory.

## Transcript

**Eric Normand**: ...so if you've been listening to the episodes where I talk about the theory I'm developing — and the book I'm writing — you might be wondering, "Is this theory too complicated? Isn't functional programming just programming with functions? What's the point of this theory, anyway?"

My name is Eric Normand, and I am writing a book. And these are my thoughts on functional programming.

I want to address those questions, because someone did mention on Twitter that he believes — and he's a very smart person, has done his research — that functional programming — quite simply — means programming with mathematical functions. Pure functions. If you are not doing that — you're using some kind of effect, or something — then you're doing imperative programming.

Nothing wrong with either of them. Sometimes you've got to do imperative programming, and sometimes you've got to do functional programming. It's just as simple as that. The reason I don't think that is good enough as an answer...I mean, I think it makes a lot of sense. It's valid in itself. It's self-consistent. Functional programming is pure functions, programming with effects is imperative.

It makes total sense. The reason I don't think it works as a paradigm is that an imperative programmer — someone who's not familiar with functional programming — would never make those distinctions. There's something about functional programming — or functional programmers — that they make this distinction between imperative code and pure functions.

You have to capture that. It's not just that I'm using pure functions. It's also that I've made this distinction and limited the amount of imperative code that I'm going to write — isolated it out, and identified it as such. I'm going to do as much as I can in the purely functional stuff.

It has to be part of your understanding — of the way you're going to code — that there is a distinction between the two. I don't think someone who's doing object-oriented programming makes that distinction.
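To make that distinction concrete, here's a minimal sketch — the names and the scenario are mine, not from the episode — of a pure calculation kept separate from an isolated, clearly identified action:

```python
def subtotal(prices):
    """A calculation: same inputs always give the same output, no effects."""
    return sum(prices)

def checkout(prices):
    """An action: it performs I/O, so we isolate it and identify it as such."""
    total = subtotal(prices)    # keep as much of the work as possible pure
    print(f"Charging {total}")  # the one imperative step, clearly labeled
    return total
```

The point isn't that `checkout` is bad — sometimes you've got to do imperative programming — it's that the code itself records which parts are which.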

I think even in the first episode of this, I talked about object-oriented programming as a paradigm. I tried to compare the functional paradigm to the object-oriented paradigm, in the sense that they're not doing the same things. There's no easy way to translate between the concept of object-oriented, and the concepts I laid out of functional.

This brings me to the idea that we should talk a little bit about the goals of this theory. I'm developing this theory, and the theory has to satisfy a few goals for it to be useful. One is, it has to be coherent and consistent with the way we actually do program. I hope I'm getting at something there, by dividing the world into those three domains of data, calculations, and actions.

Another is that it has to be generative. It has to explain, but it also has to justify the complication of the explanation by providing something new. It can't just be a good explanation that's really complicated.

I do think there's something nice and elegant about saying, "Functional programming is programming with pure functions," but it leaves out all the contributions to actions and effects — the imperative side — that something like Haskell is providing. There are a lot of contributions there, in terms of how effects compose, and interesting ones that make them easier to reason about.

We talked a lot about that already, in a previous episode. You don't want to leave that stuff out. You don't want to say sometimes you're doing functional programming in Haskell, and sometimes you're doing imperative. You want to say this is the functional contribution to imperative. That's what I'm trying to get at.

Some of the stuff I hope to get to in this series is the generative stuff. What new stuff does this way of explaining let us talk about? Let us think about? We've got to be able to have new ideas, because we've got this theory. Those ideas have to be good. [laughs] That's an issue, as well.

One of those is the idea that with three domains, you've got actions, calculations, and data. If you compose an action with either of the other two things — if you compose an action with a calculation, or you compose an action with data — then you have kind of infected the whole thing.

That thing you have built — by composing the two things — is now an action that affects — infects — everything it touches. I think this separation of stuff into domains allows you to even talk about that. The idea that there are some things called effects — that's called imperative programming — and there are some things that are pure functions — we call that functional programming.

It doesn't talk about the fact that you can compose the two, and what happens when you do. That gives a reason for having the isolation — for having a type in Haskell called IO that isolates all the effects, so that they're not infecting all your other code.
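Here's a tiny sketch of that infection idea, with hypothetical names of my own: compose a pure calculation with an action, and what you get is itself an action.

```python
counter = 0  # hidden mutable state, the source of the effect

def double(x):
    """A calculation: pure."""
    return x * 2

def read_counter():
    """An action: its result depends on when and how often you call it."""
    global counter
    counter += 1
    return counter

def doubled_counter():
    """Composing the calculation with the action: the composition is an
    action too — call it twice and you get different answers."""
    return double(read_counter())
```

In Python nothing in the types tells you `doubled_counter` has been infected; in Haskell, the IO type would make the compiler track exactly that.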

At least if you do infect your other code, you are well aware of it. Your compiler can reason, "Yes, this has been infected. It's now an IO." Another thing — I don't know if this is quite generative, but I think it might be; it's close — is all of the ways that actions can be decomposed based on time. Does it matter when an action is called, or how many times it's called?
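Those two questions can be sketched with a pair of hypothetical examples (the names are mine): one action where how many times it runs matters, and one where it doesn't.

```python
log = []  # shared mutable state for the examples below

def append_entry(entry):
    """Not idempotent: running it twice leaves two entries,
    so how many times it's called matters."""
    log.append(entry)

def set_flag(settings, value):
    """Idempotent: running it twice is the same as running it once,
    so how many times it's called doesn't matter — though *when* it runs
    can still matter, relative to anything that reads the flag."""
    settings["flag"] = value
```

Decomposing actions along these axes — call-count sensitivity and call-time sensitivity — is the kind of analysis the theory is meant to make expressible.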

To talk about actions in that way is something you can imagine an imperative programmer doing. I don't know if there's anyone who would call themselves an imperative programmer. I'm using that term just to have something to call non-functional programmers.