Make it work, make it right, make it fast

I recently, upon reading something somewhere that irked me, tweeted the old adage

1. Make it work 2. Make it right 3. Make it fast. So many projects start at 1 and skip to 3, or worse, start at 3… :-(

In reply, Conal Elliott tweeted

What does it mean to work but not be right?

which is a great question. If a program can be said to be “working”, what can that possibly mean except that it is in some sense a “right” program?1

That’s a pretty decent point. Software is a tool for getting stuff done, after all.

However, I’m not quite ready to abandon my idealism about these things, so let me have a stab at expressing myself more clearly: a “right” program follows the design implied by its own implementation. Implementing programs can lead to improved designs through deeper understanding of the underlying domains.

Now, programs can “work” just fine and yet not follow (or not expose) their own internal logic. It’s when programs are composed that their little warts and kinks start to add up quickly into serious distortions. Let’s revisit the adage:

  • Step 1: make it work;
  • Step 2: make it right;
  • Step 3: make it fast.

I have in my mind the image of two mirrors roughly facing each other (step 1): improving their alignment even slightly (step 2) can cause the tail of reflected images to grow asymptotically longer. Polishing the mirrors (step 3) helps sometimes, but only if they’re in good-enough alignment to begin with.

I don’t care about step 2 for every piece of software. For example, most embedded systems software (microwave controllers, digital watches) is so throwaway and terminal that polishing it beyond a certain point isn’t worth it.2

Foundational software, though, is where a “make it right” step becomes crucial. Software upon which other software is constructed3 has to be made right because of the costs of correcting for design flaws downstream. The total effort involved in working around kinks and warts in a foundational artifact often exceeds the effort required to fix the foundations.4

There are countless examples where an almost-right product has gained wide influence without being “made right”: Unix, whose distortions have led to X-windows and monolithic walled-garden applications; Excel, whose distortions particularly with respect to naming and scope have led to horrible kludges and workarounds in any application beyond the trivially simple; AMQP, whose original model was simple and promising, but which is now moving in the direction of the irredeemably complicated; and on, and on.

In each case, lots (and lots) of effort has been spent on optimizing the current artifact, and because very little effort has been spent on learning more about the domain and revisiting the artifact’s design decisions, even more effort has been spent downstream on workarounds for each system’s design flaws.

Lots of these cases have been successful because the software involved has happened to line up well enough with the underlying domain that new kinds of useful work can be done: getting the foundations closer to being “right” has opened up previously-unimagined spaces for new applications. Getting the foundations “right” really does increase the power and the reach of a system.

  1. Er, on second read, perhaps Conal was being ironic, and saying that a non-right program in the sense of this post cannot be said to work at all! After all, without semantics, programs are meaningless… 

  2. Although even here there are lessons to be learned. An embedded system’s distortion with respect to its own internal logic, if examined carefully, can lead to less-distorted future iterations within a product line, perhaps, and the lessons may even spill over in the course of an engineer’s career into other embedded systems. 

  3. For instance, operating systems, language virtual machines, language designs themselves, important libraries, and networking protocols, etc. 

  4. Which makes it especially important to get network protocol designs right, since flaws and felicities are magnified so strongly by the network effects involved. 

Comments (closed)
Conal Elliott 07:20, 27 Nov 2010

By my question, I really meant something like your "ironic" interpretation (though no irony intended): In what sense can an incorrect (non-right) program "work"? Other responses (by kaleidic and I think by justinsheehy) suggested an answer: symptoms of the program's incorrectness haven't yet been noticed, which I might call "under-tested" rather than "works".

I think of "right" or "correct" as meaning that a program has a clear specification and meets it. And so an incorrect program fails to meet its specification. In which case, I'd say the program "doesn't work", even if no one has yet noticed. (Otherwise, does the program stop working when or because someone noticed, even though it's exactly as it was before anyone noticed?)

I haven't much of a clue what you mean by your remark:

> a "right" program follows the design implied by its own implementation.

From your "old adage" link, I gather that some people use "work" to mean "work somewhat" and "right" to mean "work better", which I wouldn't have guessed. Strange in my ears, since I understand "right" as an absolute and their use of the word as relative. What would they say for "work better yet"? "Righter"?

Tony Garnock-Jones 15:14, 27 Nov 2010 (in reply to this comment)

"Righter" might do in a pinch! :-)

By "work", I mean "does something useful for its user". Even a program that doesn't meet its specification can be useful to a point. The only value in having a specification, clear or not, met or not, is if the end result satisfies some user's need. So then your "work better" could actually fit kind of well with what I intended by "right": the program becomes even more useful for its user, because it is better aligned with whatever platonic form (fuzzy handwave alert) lies underneath it, and so develops (possibly a lot) more power.

As an example, consider the specification "write a program that returns true if all the numbers in a list are positive". You could write two plausible variants, both following the specification:

(define (x1 ns)
  (cond
    ((null? ns) false)
    ((null? (rest ns)) (positive? (first ns)))
    (else (and (positive? (first ns)) (x1 (rest ns))))))

(define (x2 ns)
  (cond
    ((null? ns) true)
    (else (and (positive? (first ns)) (x2 (rest ns))))))

Both programs produce the required result, but they differ in their behaviour when given the empty list. The first (following Aristotle, apparently?) answers the question "are all the numbers in this empty list positive?" with "no, there being no numbers". The second answers the question with "yes, every one of the zero elements is positive". I would say the second is the righter of the two: the code is simpler, and it has better compositional properties (e.g. when given an empty list). The former will work well for all but the empty list, and so will force users to work around its defect by special-casing the empty list instead of simply passing it in.
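To make the compositional claim concrete, here is a sketch in Python (my transcription, not part of the original post) of the two variants, checking the property that distinguishes them: for x2, the answer for a concatenation of two lists is the `and` of the answers for each list; for x1, the empty-list wart breaks that equation.

```python
# Hypothetical Python transcriptions of the two Scheme variants x1 and x2.

def x1(ns):
    # Empty list answers "no"; otherwise all elements must be positive.
    if not ns:
        return False
    if len(ns) == 1:
        return ns[0] > 0
    return ns[0] > 0 and x1(ns[1:])

def x2(ns):
    # Empty list answers "yes" (vacuous truth); otherwise recurse.
    if not ns:
        return True
    return ns[0] > 0 and x2(ns[1:])

a, b = [1, 2], []

# x2 respects list concatenation: x2(a ++ b) == x2(a) and x2(b).
assert x2(a + b) == (x2(a) and x2(b))

# x1 does not: x1([1, 2]) is True, but x1([]) is False,
# so the combined answer disagrees with the answers combined.
assert x1(a + b) != (x1(a) and x1(b))
```

The failed equation is exactly the "special-casing" burden described above: a client of x1 that splits its work across sublists cannot simply `and` the partial results together.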

The specification given for this small example could obviously be repaired by specifying the behaviour for an empty list one way or the other. But my point is that the form of the program itself, and the forms of its client programs, can give strong clues as to which way the specification should be altered, or absent a change to spec, the goal for subsequent changes to the program.

It can be very hard to see the existence of (x2) when all you have in front of you is a perhaps-large ecosystem centred around (x1). But I claim that it's always worthwhile to try.