transparent like frosted glass
A good interface is usually where our explanation ends, but always where it begins.
Sherry Turkle wrote her study of the culture of computing, The Second Self, "on an Apple II computer that had, quite literally, been torn bare."1 Its circuitry had been exposed, and its operating system replaced. Even her word processor felt close to the machine; it required her to use "a formal language of nested delimiters" that could be easily "translated into electrical impulses."
This experience helped her to understand "the aesthetic of technological transparency" in early personal computing. The purpose of transparency, she said, was to give enthusiasts "the pleasure of understanding a complex system down to its simplest level."
But this understanding couldn't last. With each year, the hardware and software grew more complex. No one could expect the average user — who had, just recently, bought their first computer — to hold the entire system in their head.
And so, the meaning of transparency changed. Newer machines, like the Macintosh, encouraged users to "take the machine at (inter)face value." Deep understanding was neither required nor rewarded. "By the mid-1990s," Turkle says, "when people said that something was transparent, they meant that they could immediately make it work, not that they knew how it worked."
This contranymic transparency was especially popular in the distributed systems literature. In the 1990s, many researchers believed that the network ought to be abstracted away entirely. As Wolfgang Emmerich explained it:
[T]he fact that a system is composed from distributed components should be hidden from users; it has to be transparent.2
Here, transparency means no outward difference between local and remote resources. Any method invocation might fire off a network request. This, Emmerich asserted, would prevent the "average application engineer" from being "slowed down by the complexity introduced through distribution."
This simplicity, however, is fragile. Method invocation has no explanatory power for latency spikes or DNS failures. When things go wrong, the "average application engineer" will understand less than nothing.
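To make this concrete, here is a minimal sketch in Java. The `UserDirectory` interface and both implementations are hypothetical, chosen only to illustrate how a "transparent" interface can hide the difference between a map lookup and an HTTP request, leaving no room in its signature for latency or failure.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

// A hypothetical interface; nothing in its signature hints at distribution.
interface UserDirectory {
    String lookup(String userId);
}

// A local implementation: lookup is an in-memory map access.
class InMemoryDirectory implements UserDirectory {
    private final Map<String, String> users = Map.of("ada", "Ada Lovelace");

    public String lookup(String userId) {
        return users.get(userId);
    }
}

// A remote implementation: the same signature, but every call is an HTTP request.
// Latency, timeouts, and DNS failures are collapsed into an unchecked exception,
// because the interface has no vocabulary for them.
class RemoteDirectory implements UserDirectory {
    private final HttpClient client = HttpClient.newHttpClient();
    private final URI base;

    RemoteDirectory(URI base) {
        this.base = base;
    }

    public String lookup(String userId) {
        HttpRequest request = HttpRequest.newBuilder(base.resolve("/users/" + userId)).build();
        try {
            return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        } catch (IOException | InterruptedException e) {
            // The caller sees a generic failure, not the network beneath it.
            throw new RuntimeException(e);
        }
    }
}
```

A caller holding a `UserDirectory` cannot tell which implementation it has. That is the "transparency" Emmerich describes, and also why the abstraction offers nothing when the network misbehaves.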
A good interface is a locus. It is usually where our explanation ends, but always where it begins. It is a stepping stone, not a terminus.
And so, an interface should reveal the shape of the underlying implementation. It should only obscure the finer details that are, to most people, irrelevant. It should be transparent like frosted glass.
This is easier than it may seem. Imagine you're being onboarded. Someone has drawn lines and boxes on the whiteboard to illustrate the overall system. One box, far removed from your own project, is labelled "auth service."
In that moment, the name suffices. You know that there is, somewhere, a service responsible for authentication and authorization. And if you ever need to know more, the name gives you a broad sense of what to expect.
Some people, of course, will always look past the name. And this is fine. As we saw with the fractal metaphor, we don't need to bisect our software with a single, perfect interface. Instead, we can split it into layers, each revealing incrementally more detail.
And so, when we look past the auth service, we will find a small number of named components. Those components, in turn, can be decomposed into named classes, methods, and values. Each decomposition, however, should be less likely than the last. In a well-designed system, the names usually suffice.
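As a sketch of what that layering might look like in code, the names and types below are hypothetical rather than a prescription: the outermost interface of the auth service needs only two verbs, and the components one layer down are where a curious reader would look next.

```java
// Hypothetical placeholder value types, so the sketch is self-contained.
record Credentials(String user, String secret) {}
record Session(String token) {}
record Permission(String resource, String action) {}

// The outermost layer: the name and two methods give a broad sense of what the
// service does, without revealing how it does it.
interface AuthService {
    Session authenticate(Credentials credentials);
    boolean authorize(Session session, Permission permission);
}

// One layer down: the named components a reader finds only if they choose to look.
interface TokenIssuer {
    Session issue(Credentials credentials);
}

interface PolicyStore {
    boolean permits(Session session, Permission permission);
}
```

Most readers will stop at `AuthService`; the deeper interfaces exist for the few who need them, which is the sense in which each decomposition should be less likely than the last.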
1. Turkle 2005, p. 7
2. Emmerich 2000, p. 19
I wonder if there is always a Russian Doll feeling (assuming we can get them made from frosted glass!) to software. I often find that, despite all of the best intentions, these neat modules are almost always wrong. As the system grows, our original insights seem more and more incorrect. And yet, especially as the classes and tests abound, we are more and more reluctant to change. Maybe discussions on this are coming later, but you asked for commentary as I go along, so here we go.
Hi Ray, thanks for the comment, keep them coming. I think that layered abstractions are a necessary part of writing software at scale. The idea behind the "frosted glass" metaphor is that it should be easy to detect mismatches between adjacent layers. This, I think, mitigates a lot of the (very real) problems you've described; if we understand the essence of an abstraction, we feel more confident in using it and in discarding it when it's no longer useful.
I don't have any other material that specifically discusses this, other than the earlier "simplicity of a fractal" post. It is, however, touched on throughout the manuscript. I'll give some thought as to whether there's more to say on the subject.