transparent like frosted glass
Sherry Turkle wrote her study of the culture of computing, The Second Self, "on an Apple II computer that had, quite literally, been torn bare."[1] Its circuitry had been exposed, and its operating system replaced. Even her word processor felt close to the machine; it required her to use "a formal language of nested delimiters" that could be easily "translated into electrical impulses."
This experience helped her to understand "the aesthetic of technological transparency" in early personal computing. The purpose of transparency, she said, was to give enthusiasts "the pleasure of understanding a complex system down to its simplest level."
But this understanding couldn't last. With each year, the hardware and software grew more complex. No one could expect the average user — who had, just recently, bought their first computer — to hold the entire system in their head.
And so, the meaning of transparency changed. Newer machines, like the Macintosh, encouraged users to "take the machine at (inter)face value." Deep understanding was neither required nor rewarded. "By the mid-1990s," Turkle says, "when people said that something was transparent, they meant that they could immediately make it work, not that they knew how it worked."
This contranymic transparency was especially popular in the distributed systems literature. In the 1990s, many researchers believed that the network ought to be abstracted away entirely. As Wolfgang Emmerich explained it:
[T]he fact that a system is composed from distributed components should be hidden from users; it has to be transparent.[2]
Here, transparency means no outward difference between local and remote resources. Any method invocation might fire off a network request. This, Emmerich asserted, would prevent the "average application engineer" from being "slowed down by the complexity introduced through distribution."
This simplicity, however, is fragile. Method invocation has no explanatory power for latency or DNS issues. When things go wrong, the "average application engineer" will understand less than nothing.
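To make this concrete, here is a minimal sketch of such a "transparent" interface, with hypothetical names throughout. Nothing about `UserService` suggests that a network sits behind it; the stub hides the request entirely, so a network failure surfaces as an opaque exception on what looked like a local call.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A "transparent" interface: nothing here hints at distribution.
interface UserService {
    User getUser(String id);
}

record User(String id, String name) {}

// The client-side stub satisfies the interface with a synchronous HTTP
// request. Latency, DNS resolution, and connection failures are all
// hidden behind an ordinary method invocation.
class RemoteUserService implements UserService {
    private final HttpClient client = HttpClient.newHttpClient();
    private final URI baseUri;

    RemoteUserService(URI baseUri) {
        this.baseUri = baseUri;
    }

    @Override
    public User getUser(String id) {
        HttpRequest request =
            HttpRequest.newBuilder(baseUri.resolve("/users/" + id)).build();
        try {
            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            // For brevity, treat the response body as the user's name;
            // real code would parse a structured payload.
            return new User(id, response.body().strip());
        } catch (IOException | InterruptedException e) {
            // An unresolvable hostname or a thirty-second stall arrives
            // here, stripped of context, on what looked like a local call.
            throw new RuntimeException("getUser failed", e);
        }
    }
}
```

When this call fails, the stack trace speaks of sockets and timeouts while the interface speaks only of users; the abstraction offers no path between the two vocabularies.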
A good interface is a locus of explanation. It is usually where our explanation ends, but always where it begins. It is a stepping stone, not a terminus.
And so, an interface should reveal the shape of the underlying implementation. It should only obscure the finer details that are, to most people, irrelevant. It should be transparent like frosted glass.
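To see what frosted glass might look like in this setting, here is a contrasting sketch (names again hypothetical, and reusing the `User` record from above). The signature reveals the shape of the implementation: a remote lookup that takes time and can fail. The finer details, like the wire format and the retry policy, remain obscured.

```java
import java.time.Duration;

// The verb "fetch", the timeout parameter, and the checked exception all
// reveal that a network sits behind this interface, without exposing how
// the request is actually made.
interface UserClient {
    User fetchUser(String id, Duration timeout) throws RemoteLookupException;
}

// A named failure mode, rather than a bare RuntimeException, gives
// callers somewhere to begin their explanation when things go wrong.
class RemoteLookupException extends Exception {
    RemoteLookupException(String message, Throwable cause) {
        super(message, cause);
    }
}
```

The interface is still simple, but its simplicity no longer lies about the system beneath it.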
This is easier than it may seem. Imagine you're being onboarded. They've drawn lines and boxes on the whiteboard to illustrate the overall system. One box, far removed from your own project, is labelled "auth service."
In that moment, the name suffices. You know that there is, somewhere, a service responsible for authentication and authorization. And if you ever need to know more, the name gives you a broad sense of what to expect.
Some people, of course, will always look past the name. And this is fine. As we saw with the fractal metaphor, we don't need to bisect our software with a single, perfect interface. Instead, we can split it into layers, each revealing incrementally more detail.
And so, when we look past the auth service, we will find a small number of named components. Those components, in turn, can be decomposed into named classes, methods, and values. Each successive decomposition, however, should be needed less often than the last. In a well-designed system, the names usually suffice.
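As a sketch of those layers (every name here is hypothetical), the auth service one level down might look something like this. Each layer is meant to be explained by its names; only rarely should anyone need the layer below.

```java
// The whiteboard box, one layer down. The method names carry most of
// the explanation.
interface AuthService {
    Session authenticate(Credentials credentials) throws AuthFailure;
    boolean authorize(Session session, String resource, String action);
}

// The next layer: a small number of named components, each of which can
// be decomposed further into classes, methods, and values whenever a
// name stops sufficing.
interface CredentialStore {
    boolean verify(Credentials credentials);
}

interface TokenIssuer {
    Session issue(String principal);
}

record Credentials(String principal, char[] secret) {}
record Session(String principal, String token) {}

class AuthFailure extends Exception {
    AuthFailure(String message) { super(message); }
}
```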
[1] Turkle 2005, p. 7
[2] Emmerich 2000, p. 19
This post is an excerpt from my (incomplete) book on software design. For more about the book, see the overview.