Jon Smock Clojure/conj 2016 Interview

Jonathan Smock will be giving a talk about SAT solvers at Clojure/conj 2016.

Follow him on GitHub and Twitter.

How did you get into functional programming?

Jonathan Smock: The earliest thing I can think of that pushed me in that direction was learning to TDD. When you push that hard on testing, moving to a functional style removes a lot of pain. One of the rules of TDD is to listen to the pain. (Regardless of your feelings on TDD, it's worth trying it strictly for at least a short time as a change of style. I certainly learned a lot and not just about testing.)

I also have to give credit to Gary Bernhardt with his (original) Destroy All Software series. Amidst other topics, Gary shows some of his own inner struggles with OO and FP and the interaction between them. I feel fortunate that I subscribed at that time to take that journey with him. (Check out "Functional Core, Imperative Shell" if you haven't. I think you can buy the season; it's worth it.)

And it all came to a head when I was given a chance to learn Clojure at work. I was working as a Ruby programmer at the time, and Ruby has some nice functional aspects. But it felt really good to dive into a lisp full-time.

What is your talk about?

JS: The talk walks through a handful of algorithms employed by SAT solvers. The SAT problem itself is simple enough to explain in under a minute but meaty enough that your brain starts racing with creative ideas on how to crack it. On the flip side, I completely ignore the actual performance engineering of how to make these things fast. That involves paying attention to things like cache coherence, about which I know nothing. (Sorry, mechanical sympathizers!)
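To make the problem concrete (my own sketch, not material from the talk, and in Python rather than Clojure for brevity): SAT asks whether some true/false assignment satisfies every clause of a Boolean formula in conjunctive normal form. A minimal DPLL-style solver, with unit propagation and backtracking but none of the performance engineering mentioned above, might look like this:

```python
# A CNF formula is a list of clauses; each clause is a list of integer
# literals, where positive n means variable n and negative n means "not n".
# Example: (x1 OR NOT x2) AND (x2 OR x3)
EXAMPLE = [[1, -2], [2, 3]]

def simplify(clauses, literal):
    """Assume `literal` is true: drop satisfied clauses, shrink the rest.
    Returns None if some clause becomes empty (a conflict)."""
    out = []
    for clause in clauses:
        if literal in clause:
            continue  # clause satisfied, drop it
        reduced = [l for l in clause if l != -literal]
        if not reduced:
            return None  # empty clause: this branch is contradictory
        out.append(reduced)
    return out

def dpll(clauses, assignment=()):
    """Return a satisfying tuple of literals, or None if unsatisfiable.
    Plain DPLL: unit propagation plus branching (no clause learning)."""
    # Unit propagation: a one-literal clause forces that literal.
    units = [c[0] for c in clauses if len(c) == 1]
    while units:
        lit = units[0]
        clauses = simplify(clauses, lit)
        if clauses is None:
            return None
        assignment += (lit,)
        units = [c[0] for c in clauses if len(c) == 1]
    if not clauses:
        return assignment  # every clause satisfied
    # Branch: try the first literal of the first clause, then its negation.
    lit = clauses[0][0]
    for choice in (lit, -lit):
        sub = simplify(clauses, choice)
        if sub is not None:
            result = dpll(sub, assignment + (choice,))
            if result is not None:
                return result
    return None

print(dpll(EXAMPLE))  # → (1, 2), i.e. x1 = true and x2 = true
```

Real solvers add conflict-driven clause learning on top of this skeleton, which is where the talk's material picks up.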

But in a broader sense, it's a case study in declarative programming. This talk is the result of me drilling down from declarative programming to constraint solvers to SMT solvers to SAT solvers, until I finally bottomed out on something I could somewhat wrap my arms around. And while this is an algorithms talk, I hope it shows off some of the flexibility constraints can buy you. Like anything, working in constraints is a trade-off, but architecturally (and mentally) they are definitely a simplification.

What do you hope people will take away from the talk?

JS: There were many small, clever tricks I learned while studying SAT solvers, and I'm hoping to impart some of that learning joy. Concretely, I hope people are motivated to pick up core.logic again or try out clara-rules.

What concepts do you recommend people be familiar with to maximize their experience with the talk?

JS: While it's not a prerequisite, I highly recommend playing with core.logic, if you haven't. It will give you a more visceral understanding of what's going on as we walk through each algorithm. Also, core.logic is just awesome. Do it. (If you need a problem to solve, try sudoku or the bowling kata or something.)

What resources are available for people who want to study up before the talk?

JS: If you don't mind spoilers (or you're the type of person who likes two different takes on the same material), check out the Wikipedia page for conflict-driven clause learning. If you want to do a little more work, check out anything about microKanren. Unification to me seems very much like SAT's DPLL algorithm. If you're a complete overachiever, check out this fantastic overview of SAT solving algorithms.

Where can people follow you online?

JS: I'm jonsmock on Twitter, GitHub, and Gmail. Hit me up!

Are there any projects you'd like people to be aware of? How can people help out?

JS: Not really, so I'll take this opportunity to plug an interview and two other talks.

First, you must check out Will Byrd's interview about logic programming from a couple of years ago. It's fantastic, and short: watch it twice. Second, check out John Regehr's talk on Souper, the LLVM super-compiler. Following work like Souper is what got me interested in SMT solvers and started this whole exploration of mine. Finally, Peter Alvaro gave a great talk at Strange Loop that touches on a lot of cool logic ideas in the context of abstraction, programming languages, and distributed systems. The whole talk is great, but I specifically enjoy the explanation of Datalog starting at 19:00 and his point about self-reference and negation at 25:35.

Where do you see the state of functional programming in 10 years?

JS: Wow, life comes at you fast. I have no idea. I sure hope the robots haven't taken over by then ;-)

It feels like the great ideas of FP are spreading. Certainly we've been harvesting ideas from Haskell and Erlang for a long time, and many new languages have been influenced by Clojure's taste and data structures.

React and things like Om are exciting, and of course, there's Datomic in the database space. My hope is that the ideas behind those tools continue to gain traction in those worlds. I also hope we can reach into the games and low-level systems spaces. Games programming, in particular, seems ripe for FP and logic programming.

If functional programming were a superhero, what superpower would it have?

JS: Faster than an infinite bullet stream! More powerful than a type theory abstraction! Able to leap tall mountains of complexity in a single bound!

Ok ok, just one. How about: the power to remember the past!