“There are persons of words and persons of actions.” Such folk wisdom – that there are people who merely talk and people who tend to act – seems like common sense; it comes from the real experience of people who try to present their so-called intelligence and people who live it. That common sense holds until a philosopher like Jürgen Habermas writes a two-volume monster of a book called The Theory of Communicative Action (Volumes 1 and 2, no less), one of the most influential sociological works of the twentieth century. Suddenly, things get done through strange entities called speech acts. At least it seems sudden if you haven’t heard of John Searle. If you don’t read the annals of analytic philosophy over coffee in the morning, then this may all seem a bit new. Moreover, and more popularly, Hannah Arendt entirely blurred the distinction between speech and action, even calling speech a form of action. In fact, philosophers and sociologists, not to mention novelists, have long understood the roots of democratic citizenship, the legal system, and even personal identity to be intimately connected to language. Once you think of language this way, it is hard to think of it any other way, and the doors of understanding open a bit wider. It then makes sense that the way language evolves would have a great impact on our identities. Thus, some understanding of the evolution of language may tell us about the evolution of our identities.
Previously, I have claimed that identity is shaped amidst significant others – that we do not “pick ourselves up by our own bootstraps.” It seems that identity is not alone in being influenced by who is paying attention; our language is, too. So how does the relationship between speakers and listeners shape the evolution of a language? Perhaps the answer can also tell us something about the impact of others on our identity.
The Frequency and Clarity of Words
Some words are much more frequent than others. For example, in a sample of almost 18 million words from published texts (according to the Cambridge English corpus), the word “can” occurs about 70,000 times, while “souse” occurs only once. But “can” doesn’t just occur more frequently; it is also much more ambiguous, with many possible meanings. “Can” sometimes refers to a container for storing food or drink (‘He drinks beer straight from the can’), but it also doubles as a verb for the process of putting things in a container (‘I need to can this food’), and as a modal verb about one’s ability or permission to do something (‘She can open the can’). It even occasionally moonlights as a verb about getting fired (‘Can they can him for stealing that can?’), and as an informal noun for prison (‘Well, it’s better than a year in the can’).
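To make corpus frequency concrete, here is a minimal sketch of how such counts are gathered, assuming a plain-text corpus file; the file name and the naive tokenization are placeholders, not the Cambridge English corpus pipeline:

```python
from collections import Counter
import re

# Hypothetical corpus file; any large plain-text sample will do.
with open("corpus.txt", encoding="utf-8") as f:
    text = f.read().lower()

# Naive tokenization: treat maximal runs of letters as words.
tokens = re.findall(r"[a-z]+", text)
counts = Counter(tokens)

print(counts["can"])    # frequent: tens of thousands in a large sample
print(counts["souse"])  # rare: perhaps only once
```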
This multiplicity of possible uses raises a question: how do “can,” “souse,” and other words each end up with the particular number of meanings they have? The answer could rest in fundamental, opposing forces that cross-pressure the evolution of languages – and it may shed some light on the evolution of our identities along the way.
Once you consider it, the relationship between word frequency and word ambiguity goes well beyond “can” and “souse.” In 1945, the linguist George Kingsley Zipf noticed that frequent English words are, on average, more ambiguous than less frequent words. This pattern is surprisingly robust across languages – so much so that it is sometimes described as Zipf’s meaning-frequency law. A leading explanation for this phenomenon – first proposed by Zipf himself – begins with the premise that languages are in some sense evolved to make communication more efficient. According to this view, language change is (roughly) analogous to biological evolution: just as biological species are shaped over successive generations by the demands of their environments, languages are constrained by the needs of the people who use them. In particular, languages may evolve to minimize the effort required to convey information.
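The law is often stated quantitatively: a word’s number of meanings grows roughly as a power of its frequency (Zipf estimated the exponent at about 0.5, a square-root relationship). Here is a minimal sketch of how that exponent can be estimated from (frequency, meaning-count) pairs – the numbers below are invented, purely for illustration:

```python
import math

# Invented (frequency, number-of-meanings) pairs, for illustration only.
words = {"can": (70000, 8), "run": (40000, 6), "set": (30000, 7),
         "pencil": (500, 2), "souse": (1, 1)}

# Meaning-frequency law: meanings ∝ frequency ** delta, with delta ≈ 0.5
# reported on real lexica. Estimate delta as the least-squares slope of
# log(meanings) against log(frequency); toy data won't reproduce 0.5 exactly.
xs = [math.log(f) for f, m in words.values()]
ys = [math.log(m) for f, m in words.values()]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
print(f"estimated delta ≈ {slope:.2f}")
```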
At first sight, this theory seems like common sense. Presumably, languages would not change in ways that make them less effective. Yet the picture is complicated by the fact that communication is a two-way street. Communication requires a sender (the person trying to convey a message) and a receiver (the person trying to understand it). And, critically, what’s effective for senders isn’t always what’s most efficient for receivers.
All else being equal, a producer likely prefers utterances that are shorter and easier to say: why go to the trouble of saying 10 words where one would do? This experience is likely familiar to all of us. Rather than enumerating every minute detail of an event, perhaps we use a simpler but vaguer expression, such as ‘He was there today.’ A vaguer expression places the burden of inference on the receiver, who may prefer a more precise formulation: ‘Simon, my ex-boyfriend, came into the coffee shop where I work today.’
Zipf argued that these competing interests would manifest in the very structure of the lexicon. A producer’s ideal language (in terms of the effort required) would simply be a single word. In this language, the word ‘mo’ could communicate everything from ‘A coffee, please’ to ‘The capital of France is Paris.’ As one might expect, such an arrangement would demand much of receivers: every linguistic encounter would effectively be an exercise in mind-reading.
A receiver’s ideal language, by contrast, is one in which every meaning is conveyed by a different word, minimizing the possibility of confusion. Combined, the opposing forces created by the needs of speakers and receivers – forces that Zipf called unification and diversification, respectively – lead to trade-offs. Languages, then, must reach a compromise.
This is where Zipf’s meaning-frequency law comes in. According to Zipf, this law is a product of that compromise. We have many more words than mo, which partially satisfies the receiver’s need for clarity. But many of those words – especially the most frequent ones – can be used to express more than one meaning, which benefits producers. Put another way: the opposing forces of diversification and unification work against each other, resulting in Zipf’s meaning-frequency law.
If my analogy holds, then in the realm of the evolution of identity, the cross-pressure between the individual and the other person to whom they relate (whom I shall call the “interlocutor”) might be subject to something similar to Zipf’s meaning-frequency law. Increased exposure to a particular person in dialogue ought to reduce the work the individual has to do in expressing themselves, in being themselves. If it is a first encounter, however, the individual who is producing in discourse will have to do more work to achieve precision.
In terms of language, however, this explanation is incomplete. Does the compromise between unification and diversification imply that these forces are equal in strength – or does one pressure exert a stronger pull than the other?
Some language scientists have argued that certain aspects of language structure, such as grammar, are shaped primarily by a producer-centric pressure to make things easier to say. Given the effort it takes to produce language – speakers must ultimately translate the concepts they wish to convey into a complex series of motor commands – it would make sense that producers would take the easy option where possible and that grammar would evolve in ways that ensure an easy option is typically available. For example, English grammar gives speakers the freedom to begin a sentence with different referents (e.g., either ‘A cat scared the girl’ or ‘The girl was scared by a cat’) depending on which one is more mentally salient to a speaker at any given time.
If you are multilingual, you will readily know that languages evolve in reference to the environments in which they are native. I speak some Thai, and in Thai there are over 150 ways to describe the feeling of being hot – used frequently enough to appear in Thai idiomatic dictionaries. In Canada, I am certain we have almost as many ways of describing feeling cold. The context in which a language is used has an enormous influence on the language itself.
Less is known, however, about the sender-receiver cross-pressure on the lexicon when the receivers share roughly the same context. Finding out whether one pressure dominates word meanings requires a neutral baseline. That is, we need a sense of how many meanings each word should theoretically have in the absence of sender-centric or receiver-centric pressure. Once this expectation is established, one can compare it with real data – how many meanings each word actually has. If frequent words such as “can” have more meanings than the baseline predicts, it suggests that sender-centric pressure is stronger. And if such words have fewer meanings than expected, it suggests that receiver-centric pressure is stronger. This is the logic guiding this thought experiment.
My guess is that when senders and receivers are forced to compromise, receivers will walk away with a slightly better deal. My hunch is based on the fact that I send a lot of information, both in spoken form as a teacher and administrator, and in writing. A good writer knows that they must adjust their lexicon to their intended audience. In other words, the receiver’s needs often determine which words are sent.
Likely the most significant step is establishing a neutral baseline that privileges neither the receiver’s nor the sender’s perspective. An excellent candidate approach is to assign an expected number of meanings to each word based on the word’s phonotactic probability. Each language has rules about which sounds can start and end a word, which sounds can occur in which sequence, and so on. For example, modern English words are not allowed to begin with the onset ng–, but words in Thai are. Because of these patterns (or phonotactics), within any given language, some words are more probable than others: they contain sequences of sounds that are more commonly found in words of that language.
The phonotactic probability of a word can be calculated using a Markov model, which looks at all the words in a given language and determines which sequences of sounds are the most and least likely to appear in that language. From there, calculating the number of meanings a word should have under neutral conditions is straightforward: one multiplies its phonotactic probability by the total number of meanings available to words of that length.
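To make this concrete, here is a minimal sketch of the idea, assuming a first-order (bigram) Markov model over phonemes; the toy lexicon, the phoneme transcriptions, and the meaning total are all placeholders, not the data one would actually use:

```python
from collections import Counter, defaultdict

# Toy training lexicon: words as phoneme tuples. Real work would use
# transcriptions from a full pronouncing dictionary.
lexicon = [("k", "ae", "n"), ("s", "ae", "t"), ("k", "ae", "t"),
           ("s", "aw", "s"), ("r", "ah", "n")]

START, END = "<s>", "</s>"

# First-order (bigram) Markov model: count transitions between adjacent sounds,
# including transitions from a word-start marker and to a word-end marker.
transitions = defaultdict(Counter)
for word in lexicon:
    sounds = (START,) + word + (END,)
    for a, b in zip(sounds, sounds[1:]):
        transitions[a][b] += 1

def phonotactic_probability(word):
    """Probability of a sound sequence under the bigram model."""
    prob = 1.0
    sounds = (START,) + tuple(word) + (END,)
    for a, b in zip(sounds, sounds[1:]):
        total = sum(transitions[a].values())
        prob *= transitions[a][b] / total if total else 0.0
    return prob

def expected_meanings(word, total_meanings_for_length):
    # Neutral baseline: a word's share of the meanings available to words of
    # its length, in proportion to how phonotactically probable the word is.
    return phonotactic_probability(word) * total_meanings_for_length

# Hypothetical figure: suppose 3-sound English words carry 5,000 meanings in all.
print(expected_meanings(("k", "ae", "n"), 5000))
```

A real study would train the model on a complete pronouncing dictionary and take meaning counts from a dictionary or WordNet-style resource; the test is then the comparison between each word’s actual meaning count and this expected value.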
I tried it out, discovering that frequent words such as “can” – despite already being quite ambiguous – often had fewer meanings than the baseline predicted. Although I have not completed anything like an exhaustive study, this pattern generalized across much of the English lexicon, and to other languages: Korean, German, French, and Thai. In each language, frequent words – though ambiguous – were less ambiguous than one would expect based on their phonotactics. This is most consistent with a receiver-centric pressure winning out. In this case, it seems that while senders and receivers are forced to compromise, receivers walk away with a slightly better deal.
So if this rather sketchy test is indicative of the role of the receiver in the use of words, we might also intuit that the identity of an individual is fixed more by the perception of another than by the individual herself.
From one perspective, this finding makes perfect sense. If frequent words were too ambiguous – if “can” had 100 different possible meanings – then receivers would constantly encounter an overwhelming amount of ambiguity, so much so that it might impede communication altogether. Likewise, if the identity of an individual had ambiguously numerous possible variations, interlocutors could not actually know the person in question.
Yet it’s important to note that this result was not obvious from the get-go. It runs counter to other theories about why languages look the way they do. As noted earlier, producing language has challenges of its own, which is why some researchers argue that grammar is sender-centric. For similar reasons, one might expect these difficulties to result in a lexicon that privileges speakers: a small number of words that are very easy to retrieve and produce, each loaded with many meanings. This makes it all the more striking that the pressure to avoid ambiguity wins out in the design of human lexica.
How do individual communicative interactions bubble up to affect the very structure of the lexicon?
Moving forward, language scientists can try to replicate this result in a larger sample of languages, including those from language families such as Niger-Congo or Austronesian. They can also ask how this pressure relates to previously observed examples of ambiguity avoidance. For example, recent work found evidence of ambiguity avoidance in historical sound change. Sometimes, different sounds in a language ‘merge’ over time, meaning that they are no longer treated as distinct (for example, cot and caught are now pronounced the same way in some English dialects). Yet, according to Andrew Wedel and colleagues, mergers are less likely when they would create many homophones in a language – a prime example of how ambiguity avoidance might shape processes of language change.
But let’s not overemphasize this one cross-pressure. The lexicon is shaped not only by the sender-receiver cross-pressure; many other environmental influences (like the multilingual example I mentioned above) exert their own pressures on a language.
In terms of human identity, however, the tendency for identities to be more greatly influenced by the interlocutor than by the individual herself would be very telling. It would mean that individuals are not as autonomous and sovereign as we might have thought.
Both the speakers of words and the doers of actions rely, in large part, on the interlocutors to whom they relate and the environments in which they exist. This could seem tyrannical to some people, or just a benign fact. If common sense and superstition are two extremes on a knowledge spectrum, then the hard distinction contained in the folk idea of speakers and doers is cozying up to superstition. In fact, it may be the very neighbor on that spectrum of the mythical beast still struggling with its bootstraps.