Duck Intelligence

If it walks like a duck and it quacks like a duck, then it must understand like a duck.
Some time ago, I had a brief debate with a YouTube user about the use of the term "understanding" in reference to Large Language Models (LLMs). I tend to agree with that user's argument that an LLM doesn't actually understand anything because it's a machine. I've even written an article on the type of intelligence LLMs have that supports this idea. However, I believe that the question of whether something is intelligent or capable of understanding isn't fully resolved by analyzing it solely from a technical perspective. In other words, a seed of doubt remains, not so much about the machine's capacity to understand, but rather about our own ability to "understand the understandable."
To Understand or Not to Understand?
Asserting a machine is incapable of understanding raises two issues:
1. A philosophical issue about the meaning of "understanding" and how we determine whether someone or something understands.
2. A linguistic issue regarding the semantic extension of the term "understanding."
Don’t get me wrong: I’m not claiming that machines can understand in the way we commonly intend. Rather, I’m inviting you to reflect on how we use the term "understanding" probably… without fully understanding it.
But let’s take things one step at a time.
The Philosophical Question of Understanding
First, let’s ask ourselves how we determine whether a machine understands (or, if you prefer, is intelligent).
In the article I mentioned earlier, I referred to Forrest Gump’s motto: "Stupid is as stupid does." When I first heard it, I didn't take it seriously. I thought it was just a bit of wordplay. After thinking about it for a while, however, I found it to be deeper than it initially appeared. Essentially, we only consider someone stupid if they act or speak in ways that we deem stupid. In other words, we judge their stupidity based on their behavior.
In programming, there is a similar concept called duck typing. This is a mechanism that determines an object's suitability for a purpose based on the methods and properties it exposes rather than its declared class. The name "duck typing" is based on the assertion: "If it walks like a duck and quacks like a duck, then it must be a duck." In other words, if an object has all the characteristics needed for a certain purpose, then it is considered to have the right type for that purpose. The behavior or appearance we observe is fundamental to determining the nature of something.
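To make the analogy concrete, here is a minimal sketch in Python, a language where duck typing is idiomatic. The class and function names (Duck, RobotDuck, observe) are hypothetical, chosen purely for illustration:

class Duck:
    def walk(self):
        return "waddles"
    def quack(self):
        return "quack!"

class RobotDuck:
    # Note: no relation to Duck in the class hierarchy.
    def walk(self):
        return "waddles (on servos)"
    def quack(self):
        return "quack! (from a speaker)"

def observe(thing):
    # We never check isinstance(thing, Duck); we only care that
    # the object walks and quacks when asked to.
    print(thing.walk(), thing.quack())

observe(Duck())       # accepted: it walks and quacks
observe(RobotDuck())  # also accepted, for exactly the same reason

From the caller's point of view, the two objects are interchangeable: the judgment is made entirely on observable behavior, which is precisely the point of the analogy.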
The Turing test is based on the same principle: If a machine behaves intelligently to the point that it cannot be distinguished from a human, then it is considered intelligent. Therefore, if a machine behaves as if it has understood our requests, then it is capable of understanding.
We apply the Turing test to humans every day. Think about it for a second: How can I be sure that someone understands me? If someone behaves as I expect when I ask them for something, then I say they have understood.
The fact is, I have no way to verify their understanding other than through my own observation. I can't analyze the processes occurring in their neurons to determine whether they truly understand or whether it's simply an automatic reflex. We assume that others understand the way we do because we attribute our own capabilities to them. I'm no expert, and I don't know if this relates to the Theory of Mind, but overall, it seems fairly… understandable. It's much harder to do the same for a machine or another living creature, of course.
In short, we say a human understands the same way we do only because they share our psycho-physical characteristics. We don't have concrete proof, but there’s a good chance that’s the case. There’s no need to bring the Chinese Room argument into our assessment of other humans.
The Linguistic Question of Understanding
Now, let’s look at the linguistic side. When we say a machine is intelligent or "understands," we don't mean that its intelligence or understanding is identical to human intelligence or understanding. Remember duck typing? If a machine acts like it understands, then we say it understands. However, this doesn't mean that the same mechanisms triggered in a human are triggered inside a machine.
It’s simply anthropomorphizing the machine's behavior. We use the same term for a machine as we would for a human. We’ve always done this, and we continue to do it. From a linguistic standpoint, it’s analogous to saying an airplane "flies." In reality, we all know airplanes don't fly the way birds do: they don't flap their wings. Yet, we use the same verb. We have extended the original meaning of the verb “fly” to include the movement of an airplane. However, we are all aware that this type of flight is completely different from that of a bird.
This is a semantic extension, or neosemanticism, which occurs when an existing word acquires a new meaning when applied to a different context.
We talk about "surfing" the Web, even though we know we aren't using a surfboard. How many times have we "migrated" data from one platform to another without ever dealing with flocks of birds or herds of bison? A computer "hibernates" even in the summer, and it is certainly not a bear or a groundhog.
Many of these words with semantic extensions are used in technical fields. The reason is simple: language cannot keep up with the speed of innovation. Rather than creating obscure words that might be hard to remember, we reuse existing words with an extended meaning.
Sure, we could use the term "data transfer," but you have to admit it’s much simpler and more poetic to say that data "migrates." Similarly, we could call it "mere statistical processing," but it’s much more evocative to say that AI "understands."
Let's view semantic extension as a kind of metaphor. It helps us relate a new concept to something we already know, while recognizing that the two are not identical. This is the beauty of being human.



