intent and implication
There was, in the mid 2010s, a popular formula for explaining a new startup: "Uber, but for ____." This was a metaphor: the startup, despite targeting a different market, was similar to Uber. It was, however, a fairly ambiguous metaphor; there were many ways that a company could resemble Uber.
The most visible facet of Uber was their use of a mobile app to affect the physical world. This was, at the time, a novel concept. Every app promised a "magical user experience"; you could summon a car, summon a cleaning service, summon a doctor to prescribe medicinal marijuana¹. There was, it seemed, no limit to what you could accomplish with the tap of a screen.
Also important, however, was Uber's use of so-called gig workers. Early press coverage of this labor model typically focused on its flexibility — workers could work wherever and whenever they wanted — and glossed over the lack of benefits or guaranteed income.
For Uber to retain its magic, the car had to appear quickly. Unfortunately for the drivers, the easiest way to minimize latency is to also minimize utilization. To Uber, the passenger's time was precious and the driver's time was cheap. This was not, however, true of every startup that resembled Uber. My visit from the doctor, for instance, was scheduled several days out; the magic was that he showed up at all.
Now imagine the year is 2013, and a friend is telling you about their new startup: Uber, but for dog walkers. When interpreting this metaphor, we must consider which aspects of Uber would make this a more viable business. It's unlikely, for instance, that they are describing a roving fleet of walkers, ready to pick up your dog at a moment's notice.
Dog walking is a recurring service, built on trust. Even if the startup used gig workers, those workers would be significantly less fungible. Users would expect to recognize the person walking their dog; if a walker quit, that would weaken the user's trust. This implies a very different relationship between the startup and their labor force. It also implies that the mobile app would need to offer some sort of matchmaking service: Tinder, but for dog walkers.
Our friend's intent, when describing their startup, is to describe a viable business model. Our interpretation of the metaphor, the perspective we adopt, must satisfy that intent. Likewise, when we propose a change — adding a queue, for instance — the intent is to improve our software. This intent, paired with domain expertise, carves away the ambiguity. The metaphor becomes specific; it tells us what we need to create.
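The queue example illustrates the point: "add a queue" is ambiguous until intent narrows it. A minimal sketch in Python, under the assumed intent "absorb bursts of work without dropping any" — which implies a bounded queue with backpressure rather than an unbounded buffer (the capacity of 8 and the 20 jobs are illustrative assumptions):

```python
import queue
import threading

# Intent: absorb bursts without losing work.
# Implication: a *bounded* queue whose producers block when it is
# full (backpressure), instead of an unbounded buffer that hides
# overload until memory runs out.
tasks = queue.Queue(maxsize=8)  # capacity chosen for illustration
processed = []

def consumer():
    while True:
        job = tasks.get()       # wait for the next job
        processed.append(job)   # stand-in for real processing
        tasks.task_done()

threading.Thread(target=consumer, daemon=True).start()

for job in range(20):
    tasks.put(job)  # blocks when the queue is full: backpressure

tasks.join()  # wait until every enqueued job has been processed
```

A different intent — "never block the producer" — would instead imply `put_nowait` and an explicit policy for dropped work; the same metaphor, carved by intent, yields different code.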
This domain expertise builds atop the broader expertise we've developed through a lifetime of communication. In his influential paper "Logic and Conversation," H.P. Grice described what he called the "cooperative principle" that underpins every conversation:
Our talk exchanges do not normally consist of a succession of disconnected remarks, and would not be rational if they did. They are characteristically, to some degree at least, cooperative efforts; and each participant recognizes in them, to some extent, a common purpose or set of purposes, or at least a mutually accepted direction.
Our words, paired with some intent, generate unstated implications. And within every conversation, there is a cooperative intent:
Suppose that A and B are talking about a mutual friend C, who is now working in a bank. A asks B how C is getting on in his job, and B replies, Oh, quite well, I think; he likes his colleagues, and hasn't been to prison yet.
Our natural assumption is that B is cooperating with A; their intent is to answer the question. We must, then, find a reason for why B felt this was a useful response. Are they implying that C is prone to illegal behavior? His colleagues? Bankers as a whole? The context, in a fully cooperative conversation, should remove any lingering ambiguities.
Our explanations, then, are rarely self-explanatory. The implications are left as an exercise for the audience.
There are a number of ways this can go wrong. Our intent could be unclear. Our audience could be unable, or unwilling, to work out the implications. Or, perhaps, the explanation itself could be flawed.
But these risks exist in any collaboration. A high-level explanation, like the Uber-metaphor, is an invitation to apply our expertise. Even the person using the metaphor must, like everyone else, reason through the implications of their own words.
-
¹ My doctor, when he arrived, was riding an electric skateboard.
This post is an excerpt from my (incomplete) book on software design. For more about the book, see the overview.