Clojure Gazette 183: The Magic of Abstraction
The Magic of Abstraction
Issue 183 - July 25, 2016
We can solve any problem by introducing an extra level of indirection.
My stratum is very reliable because I've designed it to solve exactly the problems I see. For instance, my functions can check for nulls. The layer above it no longer has to be concerned with null checks. It's also shorter, easier to read, and has fewer bugs. As I debug my program, I strengthen that layer. For instance, if I do find a null pointer exception, I add a new check to a function in that layer. Over time, it becomes a more robust level of indirection. And I can imagine a time when I have solved the null pointer problem in its entirety in my code.
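Here's a minimal sketch of what such a layer might look like in Clojure. The function names are illustrative, not from the article: the point is that each function in the layer absorbs nil itself, so callers above it never see a NullPointerException.

```clojure
(require '[clojure.string :as str])

;; A hypothetical "safe" layer: each function checks for nil so the
;; layer above never has to.
(defn safe-upper
  "Upper-cases s, treating nil as the empty string."
  [s]
  (str/upper-case (or s "")))

(defn safe-first-name
  "Extracts :first-name from a user map, tolerating a nil user."
  [user]
  (safe-upper (get user :first-name)))

;; Callers in the layer above never see a NullPointerException:
(safe-first-name nil)                    ;; => ""
(safe-first-name {:first-name "alice"})  ;; => "ALICE"
```

Every nil check added here is one that every caller above gets for free.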
This idea is remarkable. How can I believe that I have solved Tony Hoare's Billion Dollar Mistake? Is it tremendous hubris on my part? I don't think so. I think I'm just taking advantage of the fundamental theorem of software engineering. And I think it's the second most important idea to come out of computer science. (I've talked about what I think is the most important idea in computer science before.)
Let's take a look at some code from Clojure's core. If you write Clojure, you're familiar with the seq abstraction, which is a function that converts things into seqs. It's instructive to look at its implementation.
At first glance, this function looks awful to me. It's just a big case statement that switches on the type of the argument. But it's a perfect example: it's an abstraction written in lower-level constructs that builds the unifying concept of the language. It makes iteration reliable in Clojure where it is not in Java. It's ugly code that makes a beautiful system.
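To make the shape concrete, here is a toy sketch of that kind of dispatch. This is not Clojure's actual source (the real `seq` bottoms out in Java, in `clojure.lang.RT`), but the structure is the same: ugly instance checks at the bottom, one uniform abstraction on top.

```clojure
(defn my-seq
  "Illustrative sketch of a seq-building function: returns a seq over x,
  or nil when x is nil or empty. A case analysis on the concrete type,
  hidden behind one function."
  [x]
  (cond
    (nil? x)    nil
    (seq? x)    (when-not (empty? x) x)
    (string? x) (when (pos? (.length ^String x))
                  (map #(.charAt ^String x %) (range (.length ^String x))))
    (instance? java.util.Map x)
    (my-seq (.entrySet ^java.util.Map x))
    (instance? java.lang.Iterable x)
    (let [it (.iterator ^java.lang.Iterable x)]
      (when (.hasNext it) (iterator-seq it)))
    :else (throw (IllegalArgumentException.
                  (str "Don't know how to create a seq from " (class x))))))

(my-seq [1 2 3]) ;; => (1 2 3)
(my-seq nil)     ;; => nil
```

Everything built on top of this one function — `map`, `filter`, `reduce`, and the rest of the sequence library — never has to care which branch fired.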
I'm not a civil or mechanical engineer. Nor am I an architect. And I would love to be shown I am wrong about this, because I don't have much proof of it. But with that preface in place, I believe that Computer Science produced the notion that a layer of abstraction can make a system more reliable. Engineering in general relies on building reliable things out of reliable parts. For instance, making a reliable building requires reliable bricks laid in the right way. To scale the building, the bricks and the laying need to be made to even greater tolerances.
Perhaps a better example is the pre-internet telephone network. At the time the ARPANET was being built, the telephone network was expanding and scaling by increasing the reliability of each part. To scale up the distances and reduce the number of dropped calls, they were investigating better materials to make each part transmit data with higher fidelity. It was working. But it cost more as the network scaled up and progress was slowing down.
The ARPANET was different. It assumed that the physical network was unreliable. With that premise, how would one ensure reliable communication? A few levels of indirection later (known as the TCP/IP stack) and you have a totally scalable and reliable network. Packets are lost, lines are cut, routers fail, hard drives crash, and yet you can still load the web page. The Internet Protocol is like an inflection point that transforms bad physical networks into good communication networks, eliminating whole classes of problems.
We use this idea every day when we program. Our modern computers have a machine code that they can run. Hand-written machine code does not scale. We are usually programming in a stack of abstraction layers above it that help us write more complex programs than would be possible in machine code. There's an optimism embedded in this idea: no matter how bad your stack is, you can abstract away the problems with suitably designed levels above it.
Does this "process" of assuming the worst generalize? Erlang seems to think so. Assume messages aren't delivered reliably (even on a single machine). Assume messages don't follow the correct format. Assume that bad data will be passed around. Assume there will be bugs in your program. With all of these premises, how should one program? Build layers of abstractions, where the reliability comes at the top of the stack. Just as Lisp embodies the most important idea in Computer Science, Erlang embodies the second most important idea.
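The Erlang stance can be pictured in Clojure terms with a toy supervising loop. This is a hypothetical sketch, not Erlang's actual supervisor API: it assumes the work function may throw, and builds a reliable call out of an unreliable one by restarting it.

```clojure
(defn supervise
  "Calls (f), retrying up to max-restarts times if it throws.
  The names and restart policy here are illustrative."
  [f max-restarts]
  (loop [restarts 0]
    (let [result (try {:ok (f)}
                      (catch Exception e {:error e}))]
      (cond
        (contains? result :ok)    (:ok result)
        (< restarts max-restarts) (recur (inc restarts))
        :else                     (throw (:error result))))))

;; A flaky function that fails twice before succeeding:
(def attempts (atom 0))
(defn flaky []
  (if (< (swap! attempts inc) 3)
    (throw (RuntimeException. "crash"))
    :done))

(supervise flaky 5) ;; => :done
```

The caller above `supervise` sees only success or a final, deliberate failure; the crashes are absorbed by the layer, just as packet loss is absorbed by TCP.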
So this is my vote for the number two idea. It's the power of abstraction to turn crap into gold. It seems to me that we should be cataloguing known limitations of systems (the halting problem, the CAP theorem, etc.) and known layers of abstraction to mitigate them. These are the "real design patterns" that could help us build better systems. What are these limitations? Are people already doing this? What would the abstraction levels look like? And how can we use pessimistic assumptions about the lower-level stack to eliminate them at a higher level?
I'm looking forward to hearing your ideas.