The Eric Normand Podcast
What is Signature-Driven Development?
Signature-Driven Development means writing function signatures before you implement the functions. I also discuss why we implement the hardest function first.
What's the problem with using arrays for pizza toppings?
I discuss why arrays aren't great for representing pizza topping selection.
Is deferring decisions about our domain a good idea?
I wonder when to deal with business rules. Do they belong in the domain layer?
Can domain modeling be taught?
I answer a listener's questions about whether domain modeling is a skill that can be taught.
Why domain modeling?
We explore why focusing on the domain model can improve your software quality.
How do we evaluate a data model?
We talk about how you can evaluate the two parts of a domain model.
What is a domain model and how do we think about them?
When do we want to refer to things by name?
In a domain model, when should we refer to things by name, and when should we nest a subcomponent?
Collections in domain models
When do we use collections in domain models, and how do we think about the states they represent?
Layout of Domain Modeling book
In this episode, I talk about the three parts of my book, which mirror the three levels of domain modeling.
The power of runnable specifications
I talk about the advantages of writing a spec directly in your production language.
What is a domain model?
In this episode, I continue the exploration of the definition of domain model to serve as a base layer of understanding to write my next book.
What is a high-level language?
We've all heard the term _high-level language_. Initially, it referred to the step from assembly languages to compiled languages. But it has another definition, which has to do with how well the language lets you think.
How is Smalltalk so small? Four rewrites.
Is the abstract stuff at the top or the bottom?
I explore a new perspective about what abstraction means and how it can cause problems.
The Christopher Alexander Effect
Why does some design advice work for some people, but not for others? And why do some agile practices work for some people, but not for others? I call that The Christopher Alexander Effect and explain how it works.
My feelings about static vs dynamic typing
Can't we all just get along?
Computer Science as Empirical Inquiry: Symbols and Search
In this episode, I excerpt from and comment on Allen Newell's and Herbert Simon's 1975 ACM Turing Award Lecture.
How far can we stretch technical debt?
Technical debt is a metaphor used to explain the tradeoff we all face when we have a deadline. How much is it worth to rush the code out the door? It's a good metaphor, but the term is often used these days to mean 'code I don't like'. In this episode, I examine the parts of the metaphor and ways in which technical debt differs from financial debt.
How to avoid premature optimization?
I explore why clean code is a lagging indicator and how the domain model is a leading indicator of maintenance cost.
What is domain modeling?
I begin exploring the process of domain modeling with a definition.
Computer Programming as an Art
I read from the 1974 Turing Award Lecture by Don Knuth.
Programmer as Navigator
We read and discuss the 1973 ACM Turing Award Lecture by Charles W. Bachman.
The Humble Programmer
We read from and comment on Edsger Dijkstra's 1972 Turing Award Lecture called The Humble Programmer. Is the problem with programming that we don't recognize our own limitations? We'll explore that and more.
What's the relationship between abstraction and generality?
Do abstract and general mean the same thing? I don't think so. I've actually stopped using the term 'abstraction' because it's so laden with semantic baggage. We explore what they do mean in different contexts, and why abstract is not a relative term.
Why is data so powerful?
In this episode, we explore why Clojure's stance of not wrapping data much is so powerful in the world we live in.
What if data is a really bad idea?
In this episode, I read from and discuss a comment thread between Rich Hickey and Alan Kay.
On the criteria to be used in decomposing systems into modules
In this episode, I read from David Parnas's important paper on modularity.
What is missing from Stratified Design?
In this episode, I explore the notion of fit and how it is missing from the Stratified Design paper.
Generality in Artificial Intelligence
In this episode, I read and comment on excerpts from John McCarthy's 1971 Turing Award Lecture.
Some Comments from a Numerical Analyst
In this episode, I read and comment on an excerpt from the 1970 Turing Award Lecture by James Wilkinson.
Don't overcomplicate the onion architecture
When using the onion architecture, you need to consider the code dependencies (actions depend on calculations), but you also need to consider the semantic dependencies (the domain should not know about the database).
Is Haskell the best procedural language?
Functional programming is a mindset that distinguishes actions, calculations, and data. That's where it derives its power. Simply applying the discipline of 'only pure functions' lets you program with a procedural mindset and still think you're doing functional programming.
Do forces really exist?
Force is an important concept in Newtonian mechanics. But do forces really exist? In fact, it is an abstraction invented by Newton. The insight revolutionized physics and universalized his model. What can we learn from it?
Could we build Newtonian mechanics on purpose?
One of the greatest domain models ever built was Newtonian mechanics. Why did it take physics, as a field, thousands of years to figure it out? What can we learn from Newtonian mechanics to help us model our own domains?
How is domain modeling related to Starbucks?
We discuss two phases of domain modeling, one easy and one difficult.
Is design a noun or a verb?
If design is a false nominalization, then we should look at the process of design instead of pontificating on what makes good design.
Has software design taken a wrong turn?
I analyze two similar definitions of software design, one from OOP and one from FP. We see that each is talking about making the code more flexible. But design shouldn't focus on the code alone. In this episode, I explore what it should focus on instead.
Form and Content in Computer Science
In this episode, I excerpt and discuss the 1969 ACM Turing Award Lecture by Marvin Minsky.
One Man's View of Computer Science
In this episode, I read from One Man's View of Computer Science, the 1968 ACM Turing Lecture by Richard Hamming.
Computers Then and Now
In this episode, we read excerpts of Maurice Wilkes's 1967 ACM Turing Award lecture titled 'Computers Then and Now'.
The Synthesis of Algorithmic Systems
In this episode, I read excerpts from Alan Perlis's Turing Award Lecture called 'The Synthesis of Algorithmic Systems'.
Is Clojure a language for hipsters?
In this episode, I contemplate whether I am an early adopter or a pragmatist, and how that influenced my choice of Clojure. Does Clojure appeal to early adopters? Has it crossed the chasm?
Lambda: The Ultimate GOTO
In this episode, I read from Lambda: The Ultimate GOTO. We learn whether avoiding GOTOs makes your code better and how to make function calls fast.
Can Programming Be Liberated from the von Neumann Style?
In 1977, John Backus presented an algebraic vision of programming that departed from the von Neumann fetch-and-store semantics. This seminal paper has influenced functional programming in many ways. In this episode, I read excerpts from and comment on John Backus's 1977 Turing Award Lecture paper Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs.
Do we use metacircular evaluators in real life?
In SICP, the authors define a metacircular evaluator of Scheme written in Scheme to change the semantics of the language. Do we do stuff like that in real life? In this episode, I explore this listener question.
The Next 700 Programming Languages
In this episode, I excerpt and comment on a seminal paper in programming language design, from all the way back in 1966, called The Next 700 Programming Languages.
What makes some APIs become DSLs?
What causes an API to cross the line into becoming a DSL? Is it really an 'I'll know it when I see it' situation? I've been searching for an answer for years. And I think I found it in a paper I read recently for this podcast: Lisp: A language for stratified design. In this episode, we go over the main factor that makes an API a DSL: the closure property.
What is software design?
I've never been satisfied with the standard definition of 'software design'. Is the term useful? What could it mean that is useful? In this episode, I talk about some definitions that I don't agree with and explain my definition.
Why Functional Programming Matters
In this episode, I read excerpts from Why Functional Programming Matters by John Hughes. Does it answer the question of what is functional programming and why is it powerful?
My response to Out of the Tar Pit
Out of the Tar Pit came out 14 years ago and it was a big influence on my thinking. I've thought a lot about it and I want to share some extensions and refinements of the ideas in the paper. Specifically, I hope to present a more objective definition of complexity and refine the idea of Essential vs. Accidental complexity.
Out of the Tar Pit
In this episode, I read excerpts from Out of the Tar Pit, a classic paper in the functional programming community.
What is software architecture?
I try to define software architecture, both in the large and in the small.
The Early History of Smalltalk
We read one of the great articles by Alan Kay, inventor of Smalltalk.
Lisp: A language for stratified design
In this first episode of season 3, we analyze a great paper called Lisp: A language for stratified design.
Year-end update 2019
I'm taking a break to retool for Season 3, which will start in the new year. I also give an update on Grokking Simplicity. I am working on Chapter 7.
Are monads practical?
Bruno Ribeiro asked a great question about the practical uses of monads. Are they useful? Why are they used so much in Haskell? In this episode, we briefly go over the history of monads in Haskell and how they allow you to do imperative programming in a pure functional language.
Where does structural similarity come from?
In a recent episode, I said structural similarity comes from the algebraic properties of the relationships between things. But that's not the case. Rotislav mentioned in the comments that it actually comes from the structure in the relationships. I explore that idea in this episode.
Do you need immutability for functional programming?
Of course immutable data structures are great, but are they necessary for FP? Short answer is: no. There's a lot of functional ideas and perspectives that can be applied even if you don't have them. And you can always make things immutable through discipline. In this episode, we explore those two ideas.
Algebra is about composition
When we look at the definitions of algebraic properties, we often see that we are defining how things compose. This is one of the main advantages of using algebraic properties to constrain our operations. If we define how they should compose before we implement them (as a unit test, for instance) we can guarantee that things will compose.
What do product and sum types have to do with data modeling?
Product and sum types allow us to exactly model any number of states with a lot of flexibility.
Can you have a clean domain model?
I was asked a great question by a listener about whether it's always possible to find a good domain model. Sometimes, the business rules are so messy, how can we find something clean and stable? In this episode, I explore how we can find a stable and clean domain model within the chaos.
What is abstraction?
We use the term 'abstraction' all the time. But what does it mean? If it's such an important concept, we should have a clear idea of its meaning. In this episode, I go over two related definitions from two important sources.
Why does stratified design work?
Stratified design is one where you build more specific things on top of more general things, typically with many layers. But why is this powerful? In this episode, we explore why it's sometimes easier to solve a more general problem than a specific one.
Why are algebraic properties important?
We often write software to automate an existing physical process. But what makes this possible? When translating from physical to digital, something must be preserved. In this episode, we look into what is preserved across that translation and why algebraic properties might help us find it.
Functional programming is a set of skills
Definitions of functional programming disagree with each other and often don't encompass the broad range of languages and practices we find in industry. The definitions also make it seem incompatible with other paradigms, such as object-oriented and procedural. However, if you look at FP as a set of skills, both problems are solved: we can identify skills that functional programmers tend to use, without making FP incompatible with other paradigms. In this episode, I explore some important, high-level skills that functional programmers employ.
The commercialization of computers
Computer commercialization set research back decades and we still haven't recovered. I explore why that is and end on some hopeful notes.
Two kinds of data modeling
Through conversations, I've realized that there are two kinds of data modeling that are distinct. They have their own constraints and needs and, consequently, their own techniques. In this episode, I explore what makes them different.
What are product and sum types?
Product and sum types are collectively known as 'algebraic data types'. These are two ways of putting types together to make bigger types. Product types multiply their states, while sum types add them. With these two 'operations', we can precisely target a desired number of states. In this episode, we explore how we can eliminate corner cases with algebraic data types.
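A minimal Python sketch of the state-counting idea from this episode (the Card and Payment types are my own illustrations, not from the show):

```python
from dataclasses import dataclass
from typing import Union

# Product type: the state space multiplies across fields.
@dataclass
class Card:
    suit: str   # 4 possible suits
    rank: int   # 13 possible ranks => 4 * 13 = 52 states

# Sum type: the state space adds across alternatives.
@dataclass
class Cash:
    amount: int

@dataclass
class CreditCard:
    number: str

Payment = Union[Cash, CreditCard]  # states(Cash) + states(CreditCard)

def describe(p: Payment) -> str:
    # Handling each alternative covers the whole sum type -- no corner cases.
    if isinstance(p, Cash):
        return f"cash: {p.amount}"
    return f"card ending in {p.number[-4:]}"
```

Multiplying and adding states like this lets you make the representable states exactly match the valid states of the domain.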
Why do I prefer Clojure to Haskell?
I prefer Clojure to Haskell. It's probably mostly due to accidents of history: getting in Lisp at an earlier age, spending more time with Lisp, and only having one Haskell job. But I do have some philosophical differences with the Haskell philosophy as it relates to Haskell as an engineering tool. In this episode, I go over four of them, and try to relate them with anecdotes.
Why do I like Denotational Design?
Denotational Design is an abstraction design process created by Conal Elliott. I like it because it really asks you to step back and design the meaning of the abstractions before you implement them. In this episode, I talk about why I like it, what it is (step-by-step), and why it's not about static types.
What is the difference between a domain model and business rules?
Business rules are different from your domain model. What goes where? I hope to tease apart this important yet subtle distinction in this episode.
Where does the power of nil punning come from?
Nil punning does give power to Lispers. But where does the power come from? Is it that nil really is a great value? Or is it more about the design choices made? In this episode, we explore this question.
What is Nil Punning?
Nil punning is a feature of Lisp. It began when nil was both the empty list and the false value. Two meanings for the same thing led to the idea of punning. Lispers liked it. And now, Clojure carries the tradition forward. In Clojure, nil has different meanings in different contexts. Is it good? Is it bad? We explore it in this episode.
What is the Curse of Lisp?
What happens when your language is so powerful that small, independent teams can solve their problems without libraries? Does everyone flock to it? Or do you just get a lack of libraries?
What is an abstraction barrier?
Structure and Interpretation of Computer Programs talked about abstraction barriers as a way to hide the intricacies of data structures made out of cons cells. Is this concept still useful in a world of literal hashmaps with string keys? I say, yes, but in a much more limited way than before. In this episode, we go into what abstraction barriers are, why (and why not) to use them, and the limits of their usefulness.
In the onion architecture, how do you make business decisions that rely on information from actions?
I've gotten several questions about how to do X or Y in the Onion Architecture. It seems like giving the architecture a name has miscommunicated how simple it is. It's just function calls that at some point are all calculations. In this episode, I try to deconstruct what makes the onion architecture work. Spoiler: it's just function calls.
Can you use types with Data Orientation?
Are types compatible with data orientation? The short answer is 'yes'. Types trade freedom of movement for clarity.
What is the benefit of data orientation?
Data orientation allows freedom of movement between layers of meaning. Each interpretation adds a layer of meaning. If the data were hidden, we would not be able to freely interpret it how we want. In this episode, we explore an example of what it means to move up and down the layers of meaning.
What is Data Orientation?
We often talk about data orientation in functional programming circles. It basically means programming with data, without hiding your data. Our software is information systems, so why not treat the data in the raw? In this episode, we dive into what is data, what data orientation is all about, and how you program with it.
What is a total function?
Total functions are functions that give you a valid return value for every combination of valid arguments. They never throw errors and they don't require checks of the return value. Total functions help you write more robust code and simpler code, free from defensive checks and errors.
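A quick sketch of the contrast in Python (division is my example, not necessarily the episode's):

```python
from typing import Optional

# Partial: blows up on b == 0, so callers need try/except or defensive checks.
def divide_partial(a: float, b: float) -> float:
    return a / b

# Total: every pair of valid arguments yields a valid return value.
# The "no answer" case is encoded in the return type instead of an error.
def divide_total(a: float, b: float) -> Optional[float]:
    if b == 0:
        return None
    return a / b
```

The total version never throws; the caller handles the None case once, at the type level, instead of guarding every call site.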
What is a continuation?
Continuations are a cool feature you often find in functional languages. In this episode, we look at what they are, why you're probably already using them, and the cool things you can do with them.
What kind of software is functional programming not suited for?
Functional programming cannot be suited for everything, right? Well, let's not be so sure. Functional programming, like imperative programming and object-oriented programming, is a paradigm. It is a fundamental way to approach software. There are many languages that fall within the umbrella of FP. It is quite possible that everything we know how to do in software, we know how to do well with FP, the same way we know how to do it with imperative and with OOP.
Grokking Simplicity Launch
My new book, Grokking Simplicity, all about functional programming, is now available in early access. The first three chapters are ready to read. Go to /gs, add the book to the cart, and use discount code MLNORMAND for 50% off.
Monads in the real world
Monads are real, y'all. They are all around us. In this metaphor-free episode, I'll share two real-world monads you interact with all the time. No burritos or space suits, I promise! Plus, we'll see why monads are useful in Haskell.
What is the difference between parallelism and concurrency?
My favorite definitions of parallelism and concurrency come from Brian Goetz. They are not the traditional ones, which focus mostly on the number of cores. In modern computing, we share so many resources that parallelism and concurrency need to account for them. In this episode, we go over those definitions.
How do you develop algebraic thinking?
A few people have asked me how to develop Level 3 thinking. I'm not sure. But I've got some directions to try. In this episode, I go over 3 and a half ways to develop algebraic thinking.
What is an algebra?
Level 3 of functional thinking is all about algebraic thinking. But what do I mean by algebra? In this episode, I try to distill down the characteristics of an algebra and explore why algebras are worth developing.
What is a calculation?
Level 1 of functional thinking is to distinguish between actions, calculations, and data. But what is a calculation? In this episode, we go over what it is, how to recognize them, and how to implement them. By the end, you should understand why they are so important to functional programming.
What is so great about object oriented programming?
Because I promote functional programming, people often take what I say to mean that I don't like OO. However, I think there's a lot of cool stuff in OO. In this episode, I go over three or four things I think OO does really well.
Why should you throw away all of your code?
You should throw away your code and try again, because it will make you a better programmer to try the same problem multiple times. Each time you can try a new style or approach to solving it. That's how you get better.
What is Data Modeling?
Data Modeling is a common technique in functional programming. It means capturing the essence of the concepts of your domain, their attributes, and their relationships in data.
What is an action? (better edit)
Functional programmers divide up their code into three categories: actions, calculations, and data. Actions are everything that can have an effect on the world, or anything that can be affected. In this episode, we go deep into what it means to be an action.
What is tail recursion?
Tail recursion is a kind of recursion that won't blow the stack, so it's just about as efficient as a while loop. Unfortunately, not all platforms support tail call removal, which is necessary for making tail recursion efficient. We talk about what it is and how to do it, even if your language doesn't support it.
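One way to do it when your language lacks tail call removal is a trampoline. A Python sketch (Python does not eliminate tail calls; the names here are my own):

```python
# Tail-recursive sum with an accumulator. Correct, but in Python each
# call still grows the stack, so large n would blow it.
def sum_to(n, acc=0):
    if n == 0:
        return acc
    return sum_to(n - 1, acc + n)

# Trampoline: the tail call returns a thunk instead of recursing,
# and a loop "bounces" until a non-callable result appears.
def trampoline(f, *args):
    result = f(*args)
    while callable(result):
        result = result()
    return result

def sum_to_tramp(n, acc=0):
    if n == 0:
        return acc
    return lambda: sum_to_tramp(n - 1, acc + n)
```

Because each bounce is only one call deep, `trampoline(sum_to_tramp, 100000)` runs in constant stack space, like a while loop.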
What is memoization?
Memoization is a higher order function that caches another function. It can turn some slow functions into fast ones. It saves the result of a function call after the first time to the cache, so if you call the function again with the same arguments, it will find it in the cache.
How does making something first class give you power?
Often, functionality starts off as code. It's if statements and imperative ideas. But, over time, you notice patterns. Those patterns can be reified as data. And that gives us tremendous power. How so? We explore it in this episode.
Is there a silver bullet for software? (part 2)
We continue this exploration with the notion that maybe practices such as Agile, Design Thinking, etc., give us a means for eliminating essential complexity that we thought was necessary. Do we really need to implement everything? Maybe not.
Is there a silver bullet for software development? (part 1)
In The Mythical Man-Month, Fred Brooks argues that there is no improvement that can give us an order of magnitude increase in productivity. His main point is that most of what's left to improve is essential complexity. But is that true? Can we throw in the towel and declare there is nothing left to improve?
Why getters and setters are terrible
Getters and setters kick the domain modeling can down the road. They leave the real design work to some other part of the code. They don't do enough to protect the semantic integrity of the object. They're terrible.
Why taming complex software?
My book is called Taming Complex Software. What's that all about? In this episode, I go into why complexity is a major problem and how functional programming can help.
3 Examples of algebraic thinking
In a recent episode, I said algebraic thinking was the third level of functional thinking. In this episode, I give some concrete examples.
What is a higher-order function?
Higher-order functions are used a lot in functional programming. They are functions that take other functions as arguments or return functions as return values, or both. You'll probably be familiar with map, filter, and reduce, which are higher-order functions. In this episode, we go kind of deep on them.
The 3 levels of functional thinking
I've noticed that people go through a certain journey when learning functional programming. I've classified it into three levels: 1) Distinction between Actions, Calculations, and Data, and learning to use them effectively; 2) Higher-order thinking, and building abstractions from higher-order functions; 3) Algebraic thinking, building coherent models with a focus on composition. This is a work in progress and I'd love your input.
What is functional thinking?
My book is coming out soon in early access. It's called 'Taming Complex Software: A Friendly Guide to Functional Thinking'. But what is 'functional thinking'? In this episode, I explain the term and why I'm no longer redefining 'functional programming'.
We make information systems
I often have to remind myself that we make information systems when we are programming for a business. My natural tendency, due to years of education, is to begin by creating a simulation. But this is not what we need to do. Instead of simulating a shopper going through a store, we need to model the information transfers that were captured in the pre-digital information system of pen-and-paper inventory and order management.
How to distinguish between commutativity and associativity
You confuse them a lot, and it's not your fault. They are similar in that they are both about order. Associativity is about the order of operations, and commutativity is about the order of arguments. We go through some examples to help clear up the distinction.
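The distinction in executable form (my own examples, in Python):

```python
# Associativity: the order of OPERATIONS (grouping) doesn't matter.
assert (1 + 2) + 3 == 1 + (2 + 3)

# Commutativity: the order of ARGUMENTS doesn't matter.
assert 1 + 2 == 2 + 1

# String concatenation separates the two: associative, but not commutative.
assert ("ab" + "cd") + "ef" == "ab" + ("cd" + "ef")
assert "ab" + "cd" != "cd" + "ab"

# Subtraction is neither: grouping and argument order both change the result.
assert (10 - 5) - 2 != 10 - (5 - 2)
assert 10 - 5 != 5 - 10
```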
Why side-effecting is not all bad
We run our software for its effects, so effects are necessary. We can't write 100% pure code. But I contend that some effecting code is better than others. In other words, there is a spectrum from bad effecting code to good effecting code. Even if you can't turn an action completely into a calculation, you should still strive to minimize implicit inputs and outputs.
What is an inverse, and why is it useful?
Inverses are everywhere. They let us undo an action. For instance, I can open a door and close it. Why do we want to do this? Because there are things I can do with the door open, and other things I can do with it closed. We need this same flexibility in our computer programs.
What makes a repl?
There's a lot of discussion on Twitter about whether Node has a repl or Python has a repl. Do they have repls? How can we tell? Well, my opinion is that what's important is how it's used, not a set of features.
How is Haskell faster than C?
Haskell is very competitive with C, and on some benchmarks, it is faster. How is that possible? With all that Haskell does on top of the raw C code, how can it possibly be faster? In this episode, I talk about two advantages of Haskell that can make it faster than C.
What is a functor?
Functors are operations with a structure-preserving property. But what is that? Are they practical? Do they have anything to do with the real world? Of course! To be useful, they must derive from real-world things we see all around us. This one is an assembly line. How? That's what this episode is all about.
Why am I podcasting about functional programming?
I received a negative YouTube comment. Normally, I ignore those, but this one insulted you, my audience. So I address it. Why am I podcasting about functional programming? What teaching techniques do I employ to help people learn?
Is your layer of indirection actually useful?
There's a cliche: any problem can be solved with another layer of indirection. That's true, but does your brilliant idea for a new layer of indirection actually solve a problem? In this episode, we explore this question and develop a rule of thumb for evaluating layers of indirection.
What a monoid is and why monoids kick monads' butt
Everyone talks about monads but monoids are where it's at. Monoids are simple and make distributed computation a breeze. In this episode, we learn the two properties that make an operation a monoid, and how to use it to do distributed computation.
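A tiny sketch of why monoids suit distributed computation, using string concatenation (the helper name `mconcat` is borrowed from Haskell, not from the episode):

```python
from functools import reduce

# A monoid: an associative binary operation plus an identity element.
# Here the operation is +, and "" is the identity.
def mconcat(chunks):
    return reduce(lambda a, b: a + b, chunks, "")

# Associativity means we can split the input, combine each part
# (possibly on different machines), then combine the partial results.
words = ["fold", "these", "in", "parallel"]
left = mconcat(words[:2])    # one worker's partial result
right = mconcat(words[2:])   # another worker's partial result
```

Because the operation is associative and has an identity, combining the partial results gives the same answer as folding the whole input on one machine.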
How do you implement lazy evaluation?
Lazy evaluation is easily implemented in any language that can create first-class computations. That means functions or objects. In this episode, I explain how to implement a Delay, which is a reusable lazy component that is common in functional programming languages.
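A sketch of such a Delay in Python (the class and method names are my own; Clojure's `delay`/`force` work along these lines):

```python
class Delay:
    """A reusable lazy value: runs its thunk at most once, on demand."""
    def __init__(self, thunk):
        self._thunk = thunk
        self._realized = False
        self._value = None

    def force(self):
        if not self._realized:
            self._value = self._thunk()  # first force: run the computation
            self._realized = True
            self._thunk = None           # drop the closure so it can be GC'd
        return self._value

evaluations = []

def expensive():
    evaluations.append(1)  # record each actual evaluation
    return 42

d = Delay(expensive)  # nothing computed yet
```

Forcing `d` twice returns 42 both times, but `expensive` only ever runs once.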
What is lazy evaluation?
Lazy evaluation is a common technique in functional programming for separating two concerns: how you generate a value from whether/when you actually generate it. We look at the two different kinds of laziness and the benefits it gives you.
How is recursion like a for loop?
People think recursion is hard, but it's no harder than a for loop. In fact, it has the same three parts; they're just not laid out in the same way. In this episode, we look at how you can spot those three parts in any recursive function.
Why do programmers put up with so much pain?
If you're using a less popular language, it may seem like there is a ton of pain there. But there's pain everywhere. Every stack has its own problems. The key is you need to pick the pain you want to live with.
Can you always find a layer of meaning in which your problem is easier?
I've always found switching languages to be educational. I learn a lot. It always makes me wonder what I might learn from a non-existing language that I would bring back to my favorite languages.
What is point-free style?
Point-free style is a way of defining functions with a very simple constraint: you cannot name arguments or intermediate values. How can you possibly do that? Well, with higher-order functions, of course. For instance, with function composition, you can define a new function without naming the arguments. Some languages, like the APL family, or Haskell, let you do this very easily.
What is referential transparency?
Referential transparency is a term you'll hear a lot in functional programming. It means that an expression can be replaced by its result. That is, 5+4 can be replaced by 9, without changing the behavior of the program. You can extend the definition also to functions. So you can say + is referentially transparent, because if you call it with the same values, it will give you the same answer.
Why you shouldn't hide your data
In OOP, we wrap our data in an interface, which is called implementation-hiding or data-hiding. In functional programming, we don't do that. We use our data in the nude. We pass the data around and allow the context to interpret the data as it sees fit. In this episode, we look at this significant difference between OOP and FP and how to do it.
What are higher-order functions?
Higher-order functions are functions that take a function as an argument and/or return a function. We use them a lot in functional programming. They are a way to define reusable functionality, as we do with map, filter, and reduce.
What is function composition?
Function composition is taking the return value of one function and passing it as an argument to another function. It's common enough that functional programmers have turned it into its own operation. In this episode, we go deep into why it's important and how you can use it and write it yourself.
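Writing it yourself takes a few lines; a Python sketch (names are mine):

```python
def compose(f, g):
    """Return the function that pipes g's return value into f."""
    return lambda x: f(g(x))

def inc(x):
    return x + 1

def double(x):
    return x * 2

# Read right-to-left: first inc, then double.
inc_then_double = compose(double, inc)
```

So `inc_then_double(3)` is `double(inc(3))`: 3 becomes 4, then 8.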
What does it mean for a function to have a zero?
Some functions have a special value that will stop computation. For instance, multiplication will stop if you multiply zero by anything. We can use this property to our advantage.
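A sketch of using multiplication's zero to stop early (my own example in Python):

```python
def product(ns):
    total = 1  # the identity of multiplication
    for n in ns:
        if n == 0:
            return 0  # zero annihilates: the rest of the input can't matter
        total *= n
    return total
```

As soon as a 0 appears, the answer is known, so there's no point consuming the rest of the sequence.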
What is a function's identity?
Some functions have identities, which are values that tell you where to start calculating. In this episode, we look at what identities are, some examples of them, and how you can use them in your own code.
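"Where to start calculating" shows up directly as reduce's initial value; a Python sketch:

```python
from functools import reduce

add = lambda a, b: a + b
mul = lambda a, b: a * b

# The identity is the starting point -- and it makes empty input
# a non-special case instead of an error.
assert reduce(add, [], 0) == 0              # 0 is addition's identity
assert reduce(mul, [], 1) == 1              # 1 is multiplication's identity
assert reduce(add, ["a", "b"], "") == "ab"  # "" is concatenation's identity
```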
Why are promises better than callbacks?
Promises are more popular than ever. They make our code better than callbacks. But why? In this episode, I dive deep into why promises are better than callbacks, and it's not just about indentation.
What are first-class functions?
First-class functions are functions that can be treated like any other value. You can pass them to functions as arguments, return them from functions, and save them in variables. In this episode, we talk about why they are important for functional programming and what features we require of them.
Where to find time to learn functional programming?
It can be really hard to find time to learn a new language or new paradigm. How can you find the time you need? In this episode, I share 5 tips for setting yourself up for success when you're learning functional programming.
Do locks slow down your code?
Yes. Locks slow down your code. But they enable your code to be correct! It's a tradeoff, but who would ever trade correctness for a little speed? In this episode, we look at the tradeoff, how to make locks less of a speed tradeoff, and some alternatives.
What is idempotence?
Idempotence means duplicates don't matter. It means you can safely retry an operation with no issues. The classic example is the elevator button: you press it twice and it does not call two elevators. We explore why we would want that property in an email server.
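A small sketch of the property in Python (the mailing-list scenario and names are hypothetical, not the episode's email-server example):

```python
# An idempotent operation: performing it twice has the same effect
# as performing it once, so a retry is always safe.
subscribers = set()

def subscribe(address):
    subscribers.add(address)  # a set ignores duplicates

subscribe("a@example.com")
subscribe("a@example.com")  # duplicate delivery: nothing changes
```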
What is commutativity and why is it so useful in distributed systems?
Commutativity is an algebraic property that means that order doesn't matter. Because network messages arrive out of order, it's the perfect property for distributed systems. In this episode, you'll learn what it is (with some real world examples), why it's useful, and 3 ways you can make an existing operation commutative.
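One small illustration in Python (a sketch only, not one of the episode's three specific techniques): combining values with `max` is commutative, so two messages produce the same result no matter which arrives first.

```python
# A commutative merge: merge(a, b) == merge(b, a), so out-of-order
# message arrival cannot change the final state.
def merge(a, b):
    return max(a, b)
```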
What is associativity and why is it useful in parallel programming?
Associativity is an algebraic property that enables us to easily break up a job into smaller jobs, do the jobs, then recombine the results. Associativity is the essence of composition. In this video, we go over what associativity is, why we want to use it, when to use it, and 3 keys for making an operation associative.
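The split-then-recombine idea in miniature, in Python (addition standing in for any associative operation):

```python
from functools import reduce

add = lambda a, b: a + b
data = list(range(1, 101))

# Because addition is associative, we can do the halves
# independently (say, on two workers) and recombine the results.
whole = reduce(add, data)
left = reduce(add, data[:50])
right = reduce(add, data[50:])
recombined = add(left, right)
```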
What are timelines and what do they have to do with functional programming?
Timelines are a system I developed for modeling time in a distributed system. You will find timelines whenever you have multiple machines, multiple processes, multiple threads, or multiple async callback chains. Since virtually all software is distributed these days, modeling time is more important than ever. Functional programming may not have all the answers, but it is asking the right questions. In this episode, we go over what timelines are and how you can start to use them to model time in your software.
Cheap or free functional programming for your team
Hiring an on-site trainer can be expensive. But training itself doesn't have to be expensive. In this episode, we go over 7 ways you can start training right away without breaking your budget.
What is recursion and when should I use it?
Recursion is associated strongly with functional programming. We do use recursion more than imperative programmers do. But we also use iteration. In this episode, we talk about what recursion is, how to use it, when to use it, and when not to use it.
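The classic shape of a recursive definition, sketched in Python (`sum_list` is a hypothetical name):

```python
def sum_list(xs):
    if not xs:                        # base case: empty list sums to 0
        return 0
    return xs[0] + sum_list(xs[1:])   # recursive case: head + sum of rest
```

In practice you'd reach for iteration or `sum` here; the recursion is the point of the illustration.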
What are side-effects?
In functional programming, people often use the term side-effect. But what does it mean? Side-effect is any external effect a function has besides its return value.
What are concurrency and parallelism?
What are concurrency and parallelism? What's the difference? Concurrency is functional programming's killer app. As we write more and more distributed systems on the web and on mobile, the sharing of resources becomes a major source of complexity in our software. We must learn to share these resources safely and efficiently. That's what concurrency is all about.
What are race conditions?
What is a race condition? We look at what causes race conditions and some ways you can avoid them.
What are pure functions?
What are pure functions? I explore the definition, the term itself, and why functional programmers like pure functions.
How to apply the Onion Architecture
I got a lot of questions about how to apply the Onion Architecture to particular situations. In this episode, I try to answer them with a specific example.
How do you create a semantic base layer?
In stratified design, we are looking for layers of meaning, each one implemented on top of the last. But how do you go about building those in an existing codebase? While it remains more of an exploration than a step-by-step method, we can still describe some techniques that help find them. In this episode, I talk about four of them.
Tension between data and entity
There is always a tension in our programs between raw data and meaningful information. On the one hand, data is meaningless alone. On the other, we want to treat it as a thing with real semantics that constrain its usage. How do we live with both of these at the same time?
Is React functional programming?
React (and other frameworks like it) will re-render their components when the data for those components changes. Is that functional? If components were pure functions, they would only render when you called them. Instead, they're being affected by changing data. My contention is that it's not really functional (as in pure functions). It's reactive programming. But reactive borrows many ideas from functional.
What is Event Sourcing?
Event Sourcing is an architectural pattern that shows up in many mature information systems. This is the fourth in my three-part architecture series.
Is there always a way to implement an algorithm without mutable state?
It's tempting to use mutable state in your algorithm. It's so convenient! And we're so used to it, if we come from an imperative paradigm. But we must remember that there is always a way, even if it's not immediately obvious. I go over two ways to implement an algorithm without mutable state.
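One common way (a sketch in Python, not necessarily either of the episode's two): thread the "state" through function arguments instead of mutating a variable.

```python
# Reverse a list without mutation: the accumulator is rebuilt as a
# new tuple at each step rather than updated in place.
def reverse_list(xs, acc=()):
    if not xs:
        return list(acc)
    return reverse_list(xs[1:], (xs[0],) + acc)
```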
What is the universal process pattern?
Part 3 of the functional architecture series. The universal process pattern is a schematic representation of software. For a software process to be useful, it needs input, it needs to calculate something from that input, and it needs to have some output or effect on the world.
What is the onion architecture?
Part 2 of the functional architecture series. When we're structuring our functional software, we want to isolate the actions from the calculations. We can do that using the Onion Architecture, which has layers like an onion. The center of the onion is your domain model, then around that are your business rules. Finally, around that is your interaction layer, which talks with the outside world, including the database, web requests, api endpoints, and the UI.
More about Stratified Design
Part 1 in the Functional architecture series. The Stratified Design, which I called "layered design" before, is a way of architecting your code as a series of layers of meaning. It's a common way of organizing your code and structuring your application.
Why is functional programming gaining traction? Why now?
The biggest companies in the world are investing heavily in functional programming. From Facebook building React and Reason, to Apple pivoting to Swift, to Google developing MapReduce, functional programming is gaining traction. But why? I go over four hypotheses and evaluate them.
Some thoughts on map, filter, and reduce
Are map, filter, and reduce popular for a reason? Do these things capture some essence of iteration? Are they just better for loops?
What do functional programmers think of the class inheritance hierarchy?
When a functional programmer looks at the typical OOP examples that show the inheritance hierarchy, they see something weird: why is one possible field plucked out to become the class? And why make it static?
Why do functional programmers focus on time?
It turns out that in distributed and parallel systems, time plays a huge role. I think that's why FP is booming these days: all websites are distributed systems. Web developers are facing all the irreducible problems of distributed systems, and they're turning to FP for answers.
What is "to reify" in software?
"To reify" means "to make real". It's an old concept from philosophy. When you name a concept, you can start talking about it. We do something similar in programming. When you take a concept and make it first class, you can begin to manipulate it with the normal programming constructs.
Why do functional programmers model things as data?
Functional programmers tend to prefer pure, immutable data to represent their domain. We talk about many of the benefits of such an approach. But we focus on one in particular: that good data representations can reduce complexity by reducing the number of if statements you need.
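A tiny, hypothetical illustration in Python of data replacing conditionals (the shipping-rate domain is invented for the example):

```python
# The domain knowledge lives in a data table, not an if/elif ladder.
# Adding a new method is a data change, not another branch.
RATES = {"standard": 5, "express": 15, "overnight": 30}

def shipping_cost(method):
    return RATES[method]
```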
Sources of complexity in software
There are two sources of complexity in software: the complexity inherent in the domain (essential complexity) and the complexity we add as programmers due to the platform or due to bad programming practices (accidental complexity).
How do we represent relationships in functional programming?
Functional programmers tend to model important relationships using data, while OO programmers tend to represent them with references.
Single Responsibility Principle for Functional Programming
How do functional programmers use the Single Responsibility Principle?
How is a book a monad?
This ain't no "a monad is a burrito" talk. This is a real-world monad, found in the wild. We look at how reading a book is monadic. It's what lets you see a list of lines of words as a single stream of words.
Layered design in functional programming
Functional programmers often talk about creating layered designs, where each layer represents a coherent level of abstraction. It's a way to organize your code.
Keeping functional code organized
People ask me how to keep functional code organized. It's a good question but my answer is so simple, it feels kind of silly saying it: I keep everything in one file, and split it up as it grows.
What is software design?
Does the term “design” make sense in the context of code? I discuss Sandi Metz’s definition from her book Practical Object Oriented Design in Ruby.
How to create a habit of reuse
In OO languages like Java, people tend to make new classes more than they reuse existing ones. In Clojure and other FP languages, we tend more on the side of reuse. How do we develop that habit?
The easiest way to make your existing code more functional
What is the easiest way to make an existing function more functional? It’s quite a simple technique and you can apply it today.
How does FP achieve reuse?
Functional programming gets its reuse by creating small components that are very well-defined. Small means they’re probably generally useful (like lists). Well-defined means they are easy to build on top of.
Why are actions hard to test by definition?
Functional programming divides the world into actions, calculations, and data. Actions are hard to test by definition, and we explore why.
How do things compose across domains?
A few episodes ago I talked about how things compose. But I didn’t get into how things compose across domains. For instance, how do actions compose with calculations? Let’s get into that, and what it reveals about our work as functional programmers.
Is functional programming declarative?
People often say that functional programming is a "declarative paradigm". I push back against that categorization. I simply think the word is mostly meaningless.
How can you work with a JSON value if you know nothing about it?
I have talked about the difficulty of typing certain JSON values coming from some APIs. The JSON is just very complicated. When I do that, I often get this question "how can you work with a JSON value if you know nothing about it?" The question is rhetorical. Of course you can't do anything if you know nothing about it. But we do know a ton! We just can't (or it's very difficult to) encode what we know as a type.
Is The Little Typer the static typing book I've been waiting for?
Dan Friedman's The Little Typer is coming out in September. I'm very excited about this book. It's about dependent types, and it claims to "demonstrate the most beautiful aspects". I can't wait!
Something I missed in Rich Hickey's last keynote (Clojure/conj 2017)
I wrote my interpretation of Rich Hickey's keynote. I called it "Clojure vs the Static Typing World". However, I missed something very big. I missed that he talked about how common it was to have lots of sub-solutions and partial data. I expand on that idea here.
Are categories Design Patterns?
People often ask 'what are the design patterns of functional programming?' A common answer is that categories from category theory, like monads and functors, are the design patterns. But is that true? I explore the consequences of that answer.
Why is making something first-class the key to expressivity?
People often say that functional programming is more expressive. But how does FP achieve that? The key is by making things first-class.
How can pure functions represent state change?
Pure functions have no effect besides returning an immutable value. If that’s true, then how can we use them to represent changing state?
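The usual answer, sketched in Python (the account example is hypothetical): a pure function represents a change by returning the next state rather than mutating the current one.

```python
# The "change" is a new value; the old state is left untouched.
def deposit(account, amount):
    return {**account, "balance": account["balance"] + amount}

before = {"owner": "Eric", "balance": 100}
after = deposit(before, 50)
```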
What is callback hell?
How is a cook like functional programming?
We look at a metaphor to explain the three domains in functional programming: actions, calculations, and data. I hope the metaphor makes some things clearer and can help people explain FP better to others.
What is the primary superpower of functional programmers?
Is there something that functional programmers do that, without that, you couldn't really call them a functional programmer? Is there a power that all other powers are dependent on?
Does functional programming have an answer for everything?
Some might say that functional programming has the answers to solve the problems of concurrent, parallel, and distributed systems. But is that true? We explore what FP has to offer.
What does it mean to compose in functional programming?
We talk a lot about composition, but what does it mean? Each domain (action, calculation, data) has its own way to compose. In this quick episode, I explain each one.
Reduce complexity at every step
Postel's Law states that a program should be liberal in what it accepts and strict in what it sends. What does this have to do with functional programming? How can this help us reduce complexity?
Why is Functional Programming more expressive?
I explore the idea that Functional Programming is more expressive than Procedural Programming (and why it is) using a metaphor of a pizza master.
What is Immutability?
Functional Programmers will talk about immutable values. What do they mean? How can you write software where none of the values change?
What does it mean for programs to be built using "whole values"?
John Hughes, FP researcher extraordinaire, says Whole Values is one of the principles of Functional Programming. But what does he mean? We explore this important concept.
How is Functional Programming like grocery shopping?
Our understanding of the real world has to be applicable to the software world. Our programming paradigms need to correspond to our intuitions of the real world.
Divide and conquer algorithms
The divide and conquer pattern is a widely used functional programming pattern. It lets you turn potentially hard problems into trivial problems and then recombine the answers.
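Merge sort is the textbook instance, sketched in Python: divide into trivially sortable pieces, then recombine the answers.

```python
def merge_sort(xs):
    # trivial problem: lists of length 0 or 1 are already sorted
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])    # divide...
    right = merge_sort(xs[mid:])
    # ...and recombine the two sorted halves
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```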
The #3 most important idea in Computer Science
Well, this one is probably the most controversial. But I think the #3 most important idea has something to do with the problem of exponential growth of the number of possible states we can represent as we add bits to our system.
The #2 most important idea in Computer Science
In my opinion, the #2 most important idea is something that came directly out of computing. But I’m not so sure. Do you know? Let me know, too!
What is the business value of Clojure?
Is there some reason a business would choose Clojure over other languages? Let’s find out!
The #1 Most Important idea in Computer Science
The idea of the Universal Turing Machine is incredibly important. But does your language support both properties?
Is Smalltalk a Functional language? Is Lisp Object-Oriented?
Alan Kay says there are only 2 Object Oriented languages that he knows of: Smalltalk and Lisp. The deeper I go into the history of Smalltalk, the more functional Smalltalk looks.
Why do we need a Theory of Functional Programming?
Though I have gotten good reception for the theory in general, a few people have asked me why we need a theory. More people have told me I’m complicating Functional Programming, which should be a simple idea.
My big beef with refactoring
I love refactoring. It’s therapeutic. It helps productivity. But refactoring is not enough to write good software. We can’t just write it so it works then clean it up. In this episode, I explain why.
Build your Core Abstraction
With limited development time, where should you focus your efforts? You should build something timeless at the center of your application to create a strong foundation to build on top of. I call that your Core Abstraction.
Focus on composition first
Where should we start when we are designing our data structures—especially the data that we expect to last a long time? The answer is in the composition operations.
Build an interface around data
Clojure programmers often complain about data structures getting unwieldy and hard to understand. How can we prevent this?
Focus on the data first
What should we design first to make sure our software will last without having to constantly rework our code? We should focus on the data first because it is the most timeless.
How variants can reduce complexity
If we don’t limit it, complexity will get out of hand. One way to limit complexity is by collapsing the number of possible states down to a few known states that we know how to handle.
Why are corner cases the devil? 😈
Corner cases make for complex code. They multiply with each other. And as they multiply, they reduce the effectiveness of each new line of code.
What will increase your programming productivity the most?
Lisps have traditionally been highly interactive. This allowed AI researchers and language developers to iterate quickly and learn about what works and what doesn’t. How can you tap into this in your workflow?
A cool Functional Programming pattern. Do you know what to call it?
I use this pattern all the time when I'm programming, and I don't know if it has a name. It involves lifting a value into a new space, solving a problem with it, then lowering it back down.
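A small, invented example of the lift/solve/lower shape in Python: to find which words disappeared between two lists, lift both into sets, solve the problem there, then lower the answer back to a list.

```python
def removed_words(before, after):
    lifted = set(before) - set(after)  # solve the problem in set-space
    return sorted(lifted)              # lower the answer back to a list
```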
Can you do Functional Programming in any language?
Can you do Functional Programming in any language? More importantly, can you learn FP in any language? What does it mean to call a language "functional"?
What does it mean for Actions to be first-class?
In Functional Programming, everything needs to be first-class. But what does that mean? And why is it important? I discuss the idea of composing Actions and Calculations dynamically.
Should we waste memory?
We have so much memory now, compared to the 1970s, that it often seems like we have memory to burn. I misspoke in a previous episode where I made it seem like I’m in favor of wasting memory. But what did I mean instead?
Is FP just programming with pure functions?
As I develop and expound this theory, it may seem to be too complicated. Isn’t functional programming just programming with pure functions? Why make this more complicated than that? We talk about my reasons and my goals for the theory.
The magical leverage of languages
If I write a straightforward solution to a problem in Clojure, it might take me a thousand lines of code to solve it. To handle all the corner cases and everything, I got a thousand lines of code. However, if I take this other approach where it's much more indirect, or instead of solving the problem that I have in front of me, I write a language—a DSL. The DSL could take me 500 lines of code to write. That's a fairly large DSL. Usually they're much smaller, but it takes me 500 lines of code. Actually, writing the solution in it only takes 10 lines of code.
Algebraic Properties and Composition
In school, we learn about a few algebraic properties. These apply very well in functional programming because their expression is so simple, their definitions are so simple and they really focus on how things compose.
Bottom up vs Top Down Programming
I think the real trick to all of that is to always think about the data as being forever. One thing that we often do when we're programming is we want things to be a little bit more concrete.
What a Clojure Web Framework might look like
One of the questions that came up a couple of times was: what is a Web framework? What does that even mean? I don't want to get into a philosophical discussion about what a framework is, whether it's a library or a framework, or any of those kinds of questions. I'll tell you what I meant, and what I still mean, by this assertion that we need one.
A Theory of Functional Programming 0006
What I want to talk about is this issue of what is an action and what is a calculation in terms of timeliness, because we know that deep down in the computer, everything is an action. Every operation depends on what is at that particular location in memory at the time the operation is run.
What Clojure needs to grow — a boring web framework and boring data science
I think a lot about what Clojure needs. Is there something that is sort of the bottleneck for growth? People talk about different things as their hypotheses for what would make it grow. I think that what Clojure needs — I'll talk about my hypotheses — I think what we really need is to solve all of the boring problems that other languages have already solved.
Programming is a pop culture and what we should do about it
I want to talk about how programming is a pop culture. It's true, programming is a pop culture. There are big trends, fads, new frameworks coming out all the time. It's all about attention and getting mind share and people watching your media about what framework or what language to use, or how to program.
A Theory of Functional Programming 0005
There are different patterns that we use as functional programmers to reduce the possible states so that it becomes easier to reason about. I think that this is something that we should talk about a little bit more, because it's actually something that isn't talked about much in imperative programming.
A Theory of Functional Programming 0004
Today, we're going to be talking about actions. Now, as counter-intuitive as it may be, functional programming has more to say about actions than it does about data and calculations. At least, more interesting stuff to say.
A Theory of Functional Programming 0003
The data tradition goes back to the early days of writing, and functional programming largely tries to learn lessons from those instead of doing what object-oriented programming tries to do, which is attach code to the data, so the data is inert. It just is what it is.
A Theory of Functional Programming 0002
In a general sense, every operation depends on when it is run and how many times it is run. Using our language or some other discipline, we can say, "Well, that memory or the register, it's kind of special. We're not going to store anything important in there so that at any point we can overwrite it."
A Theory of Functional Programming 0001
Functional programming is a paradigm. Meaning, it is a set of ideas, it's a set of concepts, a set of practices and almost like a theory of programming itself. It is a framework, meaning a mental framework for how to approach a problem that you're trying to solve with software.