Why do we talk about computers having a brain (and why the metaphor is wrong)

It is a universally acknowledged truth that machines are taking over. What is less clear is whether the machines know this. Recent claims by a Google engineer that the LaMDA chatbot may be sentient made international headlines and sent philosophers into a tizzy. Neuroscientists and linguists were less enthusiastic.

As AI advances, debate about the technology moves from the hypothetical to the concrete, and from the future to the present. This means a wider range of people – not only philosophers, linguists and computer scientists, but also policy makers, politicians, judges, lawyers and legal scholars – must form a more sophisticated view of AI.

After all, how policymakers talk about AI is already shaping decisions about how to regulate this technology.

Take, for example, the case of Thaler v Commissioner of Patents, launched in the Federal Court of Australia after the Commissioner of Patents rejected a patent application that named an AI system as the inventor. On review, Justice Beach disagreed and overturned that rejection, making two key findings.

First, he concluded that the word “inventor” simply described a function and could be performed either by a human being or by a thing. Think of the word “dishwasher”: it can describe a person, a kitchen appliance or even an enthusiastic dog.

Nor does the word “dishwasher” necessarily imply that the agent is good at their job…

Second, Justice Beach used the metaphor of the brain to explain what AI is and how it works. Reasoning by analogy with human neurons, he found that the AI system in question could be considered autonomous, and could therefore meet the requirements of an inventor.

The case raises an important question: where does the idea that AI is like a brain come from? And why is it so popular?

AI for the mathophobic

It is understandable that people without a technical background rely on metaphors to make sense of complex technology. But we would hope that policy makers develop a somewhat more sophisticated understanding of AI than the one we get from RoboCop.

My research looked at how legal scholars talk about AI. One significant challenge for this group is that they are often mathophobic. As the legal scholar Richard Posner argues, the law

offers a haven for bright young people who have a “math block”, although this usually means only that they avoided math and science classes because they could get better grades with less work in verbal areas.

Following Posner’s ideas, I reviewed all uses of the term “neural network” – the usual label for a common type of AI system – published in a set of Australian legal journals between 2015 and 2021.

Most of the articles attempted to explain what a neural network is. But only three of the roughly 50 papers engaged with the underlying mathematics beyond a general reference to statistics. Only two papers used visual aids to assist their explanations, and none used the computer code or mathematical formulas at the heart of neural networks.

In contrast, two-thirds of the explanations referred to the “mind” or biological neurons. And the overwhelming majority of those made a direct analogy: they suggested that AI systems actually replicate the function of the human mind or brain. The mind metaphor is clearly more appealing than engaging with the underlying math.
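To give a sense of what that underlying math looks like, here is a minimal sketch in Python (the inputs, weights and bias are arbitrary numbers chosen purely for illustration). An artificial “neuron” is just a weighted sum of inputs plus a bias, squashed by a simple function – arithmetic, not biology.

```python
import math

# The arithmetic inside a single artificial "neuron": multiply each input
# by a weight, add the results and a bias, then squash the total with a
# sigmoid function so the output lands between 0 and 1.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid "activation"

# Arbitrary example values, chosen only to show the calculation.
print(neuron([0.5, 0.2, 0.9], [0.4, -0.6, 0.3], bias=-0.1))  # roughly 0.56
```

A “neural network” is simply many of these arithmetic units wired together, with the weights adjusted by an optimization procedure until the outputs become useful.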

It’s no wonder, then, that our policy makers and judges – like the general public – make such heavy use of these metaphors. But metaphors lead them astray.

Where does the idea that AI is like the brain come from?

Understanding what produces intelligence is an old philosophical problem that was eventually taken up by the science of psychology. An influential statement of the problem appeared in William James’ 1890 book Principles of Psychology, which gave the first scientific psychologists the task of identifying a one-to-one correlation between each mental state and a physiological state in the brain.

Working in the 1920s, the neurophysiologist Warren McCulloch attempted to solve this “mind/body problem” by proposing a “psychological theory of mental atoms”. In the 1940s he joined Nicholas Rashevsky’s influential biophysics group, which was attempting to apply the mathematical techniques used in physics to the problems of neuroscience.

Key to these efforts was trying to build simplified models of how biological neurons might work, which could then be refined into more sophisticated and mathematically rigorous explanations.

If you have vague memories of your high school physics teacher trying to explain particle motion by analogy to billiard balls or long metallic slinkies, you get the general picture. Start with a few very simple assumptions, understand the basic relationships, and work out the complexities later. In other words, assume a spherical cow.

In 1943, McCulloch and the logician Walter Pitts proposed a simple model of neurons intended to explain the “heat illusion” phenomenon. Although it was ultimately an unsuccessful account of how neurons work – McCulloch and Pitts later abandoned it – it proved a very useful tool for designing logic circuits. Early computer scientists adapted their work into what is now known as logic design, where the naming conventions – “neural networks”, for example – have persisted to this day.
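To see why the model was so handy for circuit design, here is a small illustrative sketch in Python (the weights and thresholds are standard textbook choices, not taken from McCulloch and Pitts’ paper). A single threshold unit of this kind reproduces basic logic gates such as AND and OR.

```python
# A McCulloch-Pitts-style unit: binary inputs, fixed weights, a threshold.
# It "fires" (returns 1) only when the weighted sum reaches the threshold.
def mp_unit(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# With all weights set to 1, the same unit acts as different logic gates
# depending only on the threshold chosen.
def AND(a, b):
    return mp_unit([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mp_unit([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

The unit either fires or it doesn’t – exactly what a logic circuit needs, and nothing like the messy electrochemistry of a real neuron.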

The fact that computer scientists still use terms like these seems to have fueled the popular misconception that there is an intrinsic connection between certain kinds of computer program and the human brain. It is as if the simplifying assumption of a spherical cow turned out to be a useful way of designing ball pits, and this led us all to believe there is some necessary connection between children’s play equipment and dairy farming.

This would be no more than a curiosity of intellectual history, were it not for the fact that these misconceptions shape our policy responses to AI.

Is the solution to make lawyers, judges and policy makers pass high school math before they start talking about AI? They would certainly object to such a proposal. But in the absence of better math literacy, we need to use better analogies.

Although the Full Court of the Federal Court has since reversed Justice Beach’s decision in Thaler, it specifically noted the need to develop policy in this area. Without giving non-specialists better ways to understand and talk about AI, we will likely continue to encounter the same challenges.
