making things better
Previously, we explored how abstract explanations, paired with intent, become specific. And in our case, the intent is almost always to improve our software. But what does this actually mean?
To begin, let's consider this metaphor:
Things are looking up
This is a statement of optimism: things are improving, and we expect this trend to continue. There is, however, a wide range of up-metaphors. And, as shown by George Lakoff and Mark Johnson in their book Metaphors We Live By, these metaphors align. Combined, they create a specific vision of how things will improve:
Happy is up; sad is down:
You're in high spirits. He's really low these days.
More is up; less is down:
My income rose last year. He is underage.
Having control is up; being subject to control is down:
I am on top of the situation. He fell from power.
Good is up; bad is down:
He does high-quality work. Things are at an all-time low.
Foreseeable future events are up (and ahead):
All upcoming events are listed in the paper. I'm afraid of what's up ahead of us.1
When things are looking up, then, we are looking into a future where there is more. We will be happier, we will have more control; whatever we consider to be good, there will be more of it in our life.
The up-metaphor asserts there is an alignment between everything we consider good. It rests upon what Albert Hirschman called the synergy illusion: the belief that all good things go together.
It is of course an ancient idea, traceable in particular to the Greeks, that there is harmony among ... various desirable qualities such as the good, the beautiful, and the true. A celebrated expression of the idea is in Keats's "Ode on a Grecian Urn": "Beauty is truth, truth beauty."2
We know that this is an illusion. Tradeoffs exist; improving one aspect of a system can make other aspects worse. As projects grow, our control over them shrinks. Ugly truths abound, and beauty is a luxury we can rarely afford.
Knowing this, however, does not mean accepting it. Confronted with this dissonance, this ugliness, we inevitably gesture towards a better future. We talk about better design, better practices, better processes. We await better abstractions. We imagine a world in which we cannot help but make something beautiful.
This belief in the future, in an unending ascent towards perfection, is a belief in progress. The flaws in this belief — its internal tensions, the fact that it is closer to a theology than a theory — have been pointed out for centuries.3 It is, nevertheless, an inescapable part of the software industry. Everything we do, whether design or implementation, is oriented towards an imagined future.
Any discussion of improvement, then, should build upon these intuitions. Our metric should, wherever possible, allow for indefinite growth. Ideally, the metric should be linear; the return on our effort shouldn't have any plateaus or sudden jumps. A little more should be a little better, forever and always.
Often, there are many such metrics. Consider how we might improve a queue. We could focus on the queue's capacity; a better queue can hold more messages. We could focus on the queue's throughput; a better queue can process more messages at once. We could also focus on the queue's latency; a better queue can process a message in less time. This, however, lacks linearity; when reducing latency, there are always diminishing returns.
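The diminishing returns are easy to see in a toy calculation (the numbers here are made up for illustration): each halving of latency saves only half as much time as the halving before it, while each doubling of capacity adds as much as the last.

```python
# Toy illustration of diminishing returns: successive halvings of
# latency save less and less absolute time. The starting value is
# arbitrary; only the shape of the curve matters.
latency_ms = 100.0
for step in range(1, 5):
    halved = latency_ms / 2
    print(f"step {step}: {latency_ms:.2f}ms -> {halved:.2f}ms "
          f"(saved {latency_ms - halved:.2f}ms)")
    latency_ms = halved
```

Capacity and throughput have no such ceiling: message one million and one is worth as much as message one. Latency, by contrast, converges on a floor, and every step toward it costs more and buys less.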
In fact, as Donald Knuth reminds us, attempts at optimization can make things worse:
[T]hese attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.4
We must, however, still optimize:
Yet we should not pass up our opportunities in that critical 3%.
Optimization, then, is what philosophers call a pharmakon, an Ancient Greek word that means both remedy and poison. It is only useful when paired with expertise.
Wherever you're cautioned to do something in moderation, you've found a pharmakon. Consider the "Rule of Three," as popularized by Martin Fowler:
The first time you do something, you just do it. The second time you do something similar, you wince at the duplication, but you do the duplicate thing anyway. The third time you do something similar, you refactor.5
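In code, the rule might play out something like this — a hypothetical sketch, with invented names and validation logic, in which the third occurrence triggers the extraction of a shared helper:

```python
# Occurrence one: just do it.
def create_user(name: str) -> dict:
    if not name or len(name) > 64:
        raise ValueError("invalid name")
    return {"name": name}

# Occurrence two: wince at the duplication, but duplicate anyway.
def rename_user(user: dict, name: str) -> dict:
    if not name or len(name) > 64:
        raise ValueError("invalid name")
    return {**user, "name": name}

# Occurrence three: refactor the shared check into one place.
def validate_name(name: str) -> str:
    if not name or len(name) > 64:
        raise ValueError("invalid name")
    return name

def create_group(name: str) -> dict:
    return {"group": validate_name(name)}
```

Whether the first two functions are then rewritten to use the helper is itself a judgment call; the rule only tells us when the duplication has earned an abstraction.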
Optimization is a pharmakon; abstraction is a pharmakon. These quotes, however, tell us little beyond that. They have the structure of advice, but omit the expertise we'd need to apply them. What differentiates the critical code we should optimize from everything else? Likewise, similarity is a continuum; where should we draw the line?
If our intent is a pharmakon, we need a shared understanding of when it should no longer be pursued. The easiest way to do this is to describe a different intent, and assign it higher precedence.
Let's look at a concrete example. Some years back, I was responsible for the design and implementation of a server which would authenticate and route all incoming API requests. When trying to articulate the design goals for the project, I came up with an ordered list of sub-goals:
- It should be transparent — it should be easy for us to reason about the handling of each individual request, as well as the overall state of a server process.
- It should be stable — it should be robust when receiving unexpected volumes of both normal and pathological requests.
- It should be fast — when forwarding requests to backend services, it should add minimal overhead.
- It should be extensible — it should be easy to understand and modify in ways that won't jeopardize the first three properties.
This list was not a decomposition of the project into different pieces. There wasn't one component relating to transparency, and another relating to stability. Each represented a different holistic intent, and collectively they defined what "better" meant for the project.
The primary intent was transparency. We wanted stability except where it would make things less transparent. We wanted it to be fast except where that would make things less stable or transparent, and so on.
In this sort of list, all but the first intent are assumed to be pharmakons. This is what is often missing from discussions of optimization or abstraction: the concrete, project-specific goals that should take precedence. And even the first intent, transparency, is only an absolute good in the context of the project; we must always consider if our time is better spent elsewhere.
Intent is generative. It is what takes us from an abstract metaphor, like a queue, to a concrete implementation. Creating a shared understanding of that intent, then, is one of the most important things we can do.
We must assume, however, that this intent will be followed indefinitely. If that has the potential to lead the project astray, then we must create a list. And each time we find one intent impinging upon another, our expertise will grow.
1. All quotes excerpted from Lakoff and Johnson 1980, pp. 15-16 ↩
2. Hirschman 1991, p. 151 ↩
3. For a good survey of these analyses and critiques, see Dinerstein 2006. ↩
4. Knuth 1974 ↩
5. Fowler 2018, p. 50 ↩
This post is an excerpt from my (incomplete) book on software design. For more about the book, see the overview.