The power of runnable specifications

I talk about the advantages of writing a spec directly in your production language.

Transcript

[00:00:00] The power of runnable specifications. Hello. My name is Eric Normand. And this is my podcast. Welcome.

[00:00:13] Today, I want to talk about an idea called runnable specifications. I think the name comes from Alan Kay. That's where I got it from. I'm not sure if he invented it. I've seen other inklings of it around, but that's who I'll give credit to.

[00:00:34] So what is a runnable specification? It's when you write down, in a high-level language that you can actually run, a description of your solution to a problem. So you don't have to translate it, say, into some lower-level representation. [00:01:00] You can just say directly what you mean.

[00:01:05] Okay. That is a runnable specification. It's called a specification because it's written directly in the language you would use to specify the solution. And it's runnable because it's a programming language and you can run it. So it's a program, but that doesn't mean the program is ready for production. It's still a specification. Hopefully it is, but maybe it's not.

[00:01:34] So we're going to talk about the benefits of making a runnable specification. But I want to go back a little bit and contrast it with a different approach, so we can see what the difference is.

[00:01:54] All right. So there's this great book called How to Engineer Software: A Model-Based Approach.[00:02:00] It's this very detailed method for engineering software using a model-based approach. It's by Steve Tockey. When he talks about a model-based approach, what he's saying is: we gather requirements and write them down-- basically as UML diagrams.

[00:02:23] Why diagrams? Well, in his experience, if you use prose-- just text, where you write everything out in English-- then you have this book-sized spec. And no one is going to read it. I mean, maybe an engineer is going to read it as they're implementing it. But if you hand it to a business person and say, is this right? Is this what you really want? They're not going to read it.

[00:02:54] Likewise, if you just start writing it in code, say in C++ or in [00:03:00] Java, they're not going to read that either. It's too verbose. There's too much going on. It's too hard to translate between the code, how it runs on the machine, and the ideas that they want to capture. But he has found that people will look at diagrams, because a diagram captures a lot more information in one page. And it's something that, with a little training, you can learn to read. It requires a little training because UML relies a lot on different styles of arrows meaning different things. But once you learn those things, a business person can sit down, look at it, and say, oh, you're missing a relationship. Or this requirement is maybe too constrained; it needs to be more free. Right? So you can actually get useful [00:04:00] feedback, a useful conversation with the business person, your client.

[00:04:07] Okay. So in his process that he lays out in this book, they go through a requirements gathering phase. It doesn't mean they get all the requirements up front before they start coding, but they get the necessary requirements for the next iteration of coding, do a lot of conversing with the stakeholders, build a complete model of it in UML, and then translate it into C++ or Java or whatever the job requires. Okay. So they don't really get to see it run until the end. Then, also, there's a disconnect between the specification [00:05:00] in UML and the program written in Java. If you write the Java and you say, oh, it doesn't work quite right, now you have to go back to your spec and make a change. And it's not always clear where that change should be, because you've translated it, and it's not a one-to-one mapping. That's a problem.

[00:05:26] With runnable specifications you avoid this problem. You are writing in a high-level language-- sometimes a language that you have custom built just for this problem, but often not. Maybe it's more like a language in the sense used in Structure and Interpretation of Computer Programs, where even a set of functions that work well together would be called a language.

[00:05:54] You build a model of the problem in your language. And then you [00:06:00] write the solution to that problem in that model. And now you can run it. So you can see it work.

[00:06:17] Now, what I said before is that it's a program. It might not be production ready. It might not be optimized. It might not have all of the operational requirements necessary. For instance: it might run in memory and not have any persistence to a database. That could be part of the trade-off that you're making. You could persist it to a database if you wanted to, but for expediency, you might do it in memory and not have to deal with the database.
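For instance, a minimal sketch of what such an in-memory runnable specification could look like. The function names and the tiny account domain here are hypothetical, just for illustration:

```python
# A hypothetical runnable specification for user accounts, kept entirely
# in memory. The "database" is just a dict; no persistence, no optimization.

def new_system():
    """Create an empty system. All state lives in this one dict."""
    return {"accounts": {}}

def register(system, email):
    """Create an account. The email address starts out unconfirmed."""
    system["accounts"][email] = {"email": email, "confirmed": False}

def confirm_email(system, email):
    """Simulate the user clicking the confirmation link."""
    system["accounts"][email]["confirmed"] = True

# Because it's runnable, you can exercise it immediately:
s = new_system()
register(s, "alice@example.com")
confirm_email(s, "alice@example.com")
print(s["accounts"]["alice@example.com"]["confirmed"])  # True
```

A handful of plain functions over a dict is the whole "language" here, in that SICP sense: enough to state the rules and watch them run.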

[00:06:52] And so this method gives you all of the benefits of UML except, potentially, [00:07:00] that now you're writing code. And in Tockey's experience, business people, your clients, the stakeholders don't want to read code. However, they might read really concise code written in a high-level language. And there's a lot of evidence of this in the Ruby community, where they talk about sitting down with stakeholders, and if the DSL was right, it was readable by the stakeholders. They could even write unit tests for the requirements in this DSL, and the stakeholders could understand and comment on the unit tests-- whether they actually captured what they were after.

[00:07:58] So this is the kind [00:08:00] of approach that I want to take in my book, where we're trying to make a runnable specification instead of building UML diagrams. I think UML is a neat approach. But I just feel the experience of being able to run it as quickly as possible is a better way of getting good feedback from the machine,

[00:08:31] from your own intuition, and from the stakeholders. They can see it run. There's no need to imagine what it will do. You can see it. Right? So a stakeholder can say, I don't know if that's right. Let's build a small sample application. Let's walk through a use case. How would you encode it? And let's see what the result will be. [00:09:00]

[00:09:00] And you can do that right in front of their eyes, because you're writing directly in the DSL that captures the model.

[00:09:12] All right. So I also want to talk about some of the things you can do with a runnable specification. One thing is, it might be good enough for production. It's very possible that, at the scale you're currently at, this runnable specification will be fast enough.

[00:09:36] The next thing is that you can start running tests against it. Does it have the properties that I think it has? You've got a working system. You can run it in different scenarios and see if it does what you expect. You can do example based tests. You can also do property-based tests. Does [00:10:00] the whole set of operations maintain a certain property?

[00:10:05] So one example is if you have, say, a user login system, and when a user registers, you often want to confirm their email address. And so you have this flow of entering your email-- okay, we created your account-- and now go check your email and click on the link. And that will confirm that you really are the owner of that email address.

[00:10:33] You build that into your software. And, later, you get another requirement: oh, users want to be able to change their email address. So you put in a thing where you have a form, and you enter your new email and hit enter, and it changes your email address. Now, the problem is that email is not validated. So you violated [00:11:00] one of the maybe unspoken assumptions: that all email addresses in the system are validated.

[00:11:08] You could actually write a property-based test that checks that assumption-- that asserts that it's true: that all email addresses in the system either need to be validated or be in the process of validation.

[00:11:27] You can write a property-based test that randomly generates sequences of actions on the account and sees if you can get into a state where an account has an unvalidated email.
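Here's a rough sketch of what that could look like, hand-rolled with the standard library's `random` module rather than a property-testing framework. The account model is hypothetical, and the `change_email` action is deliberately buggy so the random search has something to find:

```python
import random

# A hand-rolled property-based test: generate random sequences of actions
# against a hypothetical in-memory account model and check an invariant
# after every step.

def step(accounts, validated, action):
    """Apply one action. `validated` records addresses whose owner
    actually clicked the confirmation link."""
    kind = action[0]
    if kind == "register":
        accounts[action[1]] = {"status": "pending"}
    elif kind == "confirm" and action[1] in accounts:
        accounts[action[1]]["status"] = "confirmed"
        validated.add(action[1])  # this address really was validated
    elif kind == "change_email" and action[1] in accounts:
        # Bug on purpose: the "confirmed" status follows the account,
        # but the new address was never validated.
        accounts[action[2]] = accounts.pop(action[1])

def invariant_holds(accounts, validated):
    """Property: every confirmed account's address was actually validated."""
    return all(email in validated
               for email, acct in accounts.items()
               if acct["status"] == "confirmed")

random.seed(0)
emails = ["a@example.com", "b@example.com"]
violation = None
for _ in range(2000):
    accounts, validated = {}, set()
    for _ in range(random.randint(3, 12)):
        kind = random.choice(["register", "confirm", "change_email"])
        args = (random.sample(emails, 2) if kind == "change_email"
                else [random.choice(emails)])
        step(accounts, validated, (kind, *args))
        if not invariant_holds(accounts, validated):
            violation = (kind, *args)
            break
    if violation:
        break

print("invariant violated by:", violation)
```

The random search stumbles onto the register / confirm / change-email sequence and reports the action that broke the property-- exactly the unspoken assumption surfacing before any production code exists.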

[00:11:41] It's something that you can do on the specification and learn ahead of time, like, oh, this specification is not giving us the properties we want before you write the code for it.

[00:11:57] Okay. Another thing you can do [00:12:00] with the runnable specification is you can refactor from it to a production-ready system. It's a working system with the correct behavior. And so you can apply refactorings to turn it into something that is fast enough or has the operational requirements, like using a database. You can convert it into something that has all those other requirements, just by refactoring. Because you're just changing the "structure" of it, as they say in the refactoring world.

[00:12:41] Okay, finally, yes, it's probably a little slower than it needs to be, but that's fine for testing. So you can actually test it against the real production system.

[00:12:53] You have a thing that does what we want it to do. We've refactored it-- we've kept the [00:13:00] original, but we've made a new thing from it that's supposed to have the same behavior. And now we can compare and see that they always do the same thing.

[00:13:11] Right. So it gives you this ability: you have this "perfect" version. "Perfect" in quotes, but you have a working version that's much smaller, that doesn't have all the optimizations that make it unclear what it does. It's not spread out all over in the way it gets when you refactor-- like, oh, part of my logic now has to be in the database, and part of it has to be in this query, and part of it's over here.

[00:13:39] And so now you can compare the two. It's better than a unit test, right? Because you know what you want your system to do.
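A minimal sketch of that kind of comparison, with both sides invented for illustration: the "specification" states an order-total rule the obvious way, the "production" version has a different structure, and random inputs check that they always agree.

```python
import random

# Differential testing sketch: run a runnable specification and a
# refactored "production" version on the same random inputs and compare.
# Both functions are hypothetical stand-ins; prices are integer cents
# so the arithmetic stays exact.

def spec_total(cents, discount_pct):
    """Specification: the obvious, direct statement of the rule."""
    return sum(cents) * (100 - discount_pct) // 100

def prod_total(cents, discount_pct):
    """Refactored version: different structure, same intended behavior."""
    subtotal = 0
    for c in cents:  # imagine this accumulation moved into a database query
        subtotal += c
    return subtotal * (100 - discount_pct) // 100

random.seed(1)
for _ in range(1000):
    cents = [random.randint(0, 10_000) for _ in range(random.randint(0, 6))]
    discount_pct = random.choice([0, 5, 10, 25])
    assert spec_total(cents, discount_pct) == prod_total(cents, discount_pct)

print("spec and production agreed on 1000 random orders")
```

In a real system the production side would hit a database or a cache; keeping the spec small and in memory is what makes it cheap to run as the oracle here.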

[00:13:53] And that's in addition to the idea of being able to run it right away. And check out your intuition [00:14:00] and see if it works the way you expect.

[00:14:06] Okay. That's all I wanted to say about this. Let me know what you think. I'm very interested in practical stories, like real world anecdotes about using this, whether there were problems with it, or whether you like this kind of thing. Thank you very much.

[00:14:26] My name is Eric Normand. This has been another episode of my podcast. Thank you for listening, and, as always, rock on.