I recently came across this article by senior research psychologist Robert Epstein, titled “The Empty Brain”. It was linked in a critical Facebook post whose author was upset that the article’s superficially most important point appeared to be that the brain is not a computer. Ironically, the poster had become stuck on Epstein’s wording and missed the actual thought behind the phrasing.
I started reading the article and initially found myself with the same ire as the Facebook poster. Fortunately, I have experience in looking past my first reactions. It looks like Epstein uses provocatively rough language as a tool to shake people awake from being satisfied with looking at the world through tinted glasses. His point is not that the brain couldn’t abstractly be considered a computer. As I understand it, Epstein’s point is the same as the old proverb: “When all you have is a hammer, everything looks like a nail.” Here I am, of course, on shaky footing, being first and foremost a computer scientist, with only a minor subject and a keen interest in psychology and awareness. My hammer and nail are the computer, so I should be biased to see everything, including the brain, as a computer.
Epstein’s point appears to me to be that there are certain design patterns applied in computer engineering today, and when people talk about the brain as a computer they do not sustain a sufficient level of abstraction, but stray into thinking that these design patterns apply to the brain-computer as well. This means that when we have a memory of something, we slip into thinking that the mechanism inside the brain is the same as in modern computer devices: we imagine that somewhere in our brain there is a set of neurons or charges dedicated to storing this information. This kind of thinking was reinforced in me quite recently during a lecture by a psychology researcher, who explained that some neuroscientists had discovered a “Tom Cruise neuron” in a research subject. This neuron fired if, and only if, the subject was shown an image in which Tom Cruise appeared. It is easy to consider this neuron part of a memory area in the brain dedicated to remembering what Tom Cruise looks like, which is one of the very valid points Epstein argues against.
Epstein appears to attack this particular analogy vehemently, but that might be just my misconception, where he only takes memory as a case example. Epstein writes: “Here is her drawing ‘from memory’ (notice the metaphor)”. For me this seems like going too far, but that may be because I’m not a native English speaker. I do know that there’s a phrase “from the heart” that might be a preferred idiom. However, in my native language, Finnish, the phrase would be “ulkomuistista”, which translates roughly as “from outer memory”, suggesting the use of memory without the help of textbooks or anything else. Even if the brain is better at identifying images than recalling them, there still is some mechanism from which a rudimentary recollection of an image can be drawn. It would also be interesting to hear Epstein’s thoughts on people who claim to have a photographic memory. Here I think Epstein saw a nail where there really isn’t one, but that is a minor error that actually goes to prove the point I perceive Epstein to be driving at.
The problem of seeing nails everywhere is the same one that agile methodologies try to combat. Kent Beck and Cynthia Andres, the creators of eXtreme Programming, wrote: “This is the paradigm for XP. Stay aware. Adapt. Change.” And: “Even if I knew all the same gardening practices as [a master gardener], I still wouldn’t be a gardener. [A master gardener] has a highly developed sense of what is good and bad about gardening.” And: “You will benefit from studying and trying parts of XP.” And: “If you just stop writing documentation and use XP as an excuse, you will be called on your behaviour by the community. Belligerently saying, ‘We don’t have to write documentation because we’re extreme,’ shows contempt for communication, not an embracing of communication as a value.”
Agile methodologies all promote constantly thinking about what you are doing, instead of thinking only at the beginning and then just hammering away as initially planned. This is slightly analogous to prejudice – basically Epstein’s topic. Thinking is heavy and takes time, and beings generally try to avoid it. When you really want to do something, you want to focus on doing it and not be distracted by other activities – one of those other potential activities being thinking. When you weed a garden, you can be quite efficient when you just concentrate on identifying a weed and plucking it out. If, for each plant you see, you start analysing its usefulness in the garden, you have to pause and spend more energy in your brain, which is a waste, as you will come to the same conclusion every time: pluck the weed and leave the plants you are trying to grow.
As Epstein writes, this is what brains have evolved to be amazingly good at. We observe a visual stimulus of a plant. We associatively pair the visual stimulus with a classification of the plant as weed or non-weed. This gives us a quick and energy-efficient solution to guide our physical activity. However, in more complex cases our quick associations may be faulty. Our brain has a mechanism to tackle this problem too. If we come across a plant that gives conflicting stimuli – say, it has the correct size and shape, but the wrong colour – our brain stops us to think more carefully. We become alert.
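The fast-association-with-a-fallback-to-deliberation pattern described above can be sketched in code. This is a purely illustrative toy, not a model of any real cognitive architecture; the feature names and the weed profile are invented for the example.

```python
# Hypothetical sketch: a cheap "fast path" classifies by matching all
# features against a learned profile; conflicting features trigger the
# costly "stop and think" path instead of a snap judgement.

KNOWN_WEED_PROFILE = {"size": "small", "shape": "jagged", "colour": "green"}

def classify_plant(plant):
    """Return a fast classification, or an alert when features conflict."""
    mismatches = [k for k, v in KNOWN_WEED_PROFILE.items() if plant.get(k) != v]
    if not mismatches:
        return "weed (fast association)"
    if len(mismatches) == len(KNOWN_WEED_PROFILE):
        return "not a weed (fast association)"
    # Partial match: correct size and shape but wrong colour, for example.
    return f"alert: stop and think, conflicting features {mismatches}"

print(classify_plant({"size": "small", "shape": "jagged", "colour": "green"}))
print(classify_plant({"size": "small", "shape": "jagged", "colour": "purple"}))
```

The second call is the “correct size and shape, but wrong colour” case: the cheap lookup refuses to answer and escalates instead.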
Classification and prejudice are efficient for our brain. And not only in practical life, but also in science. One might even be bold enough to say that science is nothing but creating classifications and prejudices. Whenever we abstract things from reality, we group individual things together and claim they are similar. We simplify. Classifications might be most familiar from biology. A pike is a fish. There are many other species of fish, and they form a class separate from mammals. All biological entities are classified in a taxonomy, which appears perfect – until one finds things like the platypus that defy the classification. Yet the classification is highly practical, even if it’s not “perfect”. The problem is that classifications keep you thinking “inside the box”, and then you should remember the quote attributed to Henry Ford: “If I had asked my customers what they want, they would have answered: faster horses.”
Prejudices come at many levels. The most unfortunate and best known of them is when people generalise problems onto easily identifiable targets and become hostile towards those targets – racism. Doing something oneself appears like more work than blaming others – a faulty attempt at energy efficiency. Finding easy identifiers on other people, such as skin colour, place of origin, or language, is easier than getting to know them and identifying the more complex real reasons behind the problems.
I would say that it is okay to call the brain a computer, but we must not think of it as a typical contemporary digital computer. Rather, we have to understand that in this analogy the brain is a computer in a more abstract sense:
The brain is fed input, which it stores – but the input is much more complex than what a computer device receives, and the storing also happens quite differently. In computer devices, only electrical charges change; my understanding is that in the brain, the whole physical structure changes. An artificial neural network is closer to the functioning of a brain, but even a typical neural network doesn’t change its configuration, only its weights. Furthermore, a neural network is not exactly the same as the brain, only more similar than a more traditional information-system design – just as a movie is more similar to real life than a photo: you still can’t interact with the people in it, and it can be replayed identically an infinite number of times.
The brain processes information and produces output – the output is even predictable based on the input, though not as reliably as with computer devices. Once again, neural networks are closer to the human brain here than anything else. They are taught, they adapt to the reality they perceive, and it is very difficult to see exactly how the input is converted into output. Yet the complexity achievable at the modern technology level is still hugely smaller than the complexity of the human brain. Also, the structure of a neural-network neuron is only a stick figure of a real brain neuron, with probably violent shortcuts in the design.
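The two points above – that learning changes only weights while the wiring stays fixed, and that the learned input-to-output mapping is not spelled out anywhere as explicit rules – can be seen even in the smallest possible network. Below is a minimal sketch, a single artificial neuron trained with the classic perceptron rule to behave like an AND gate; the topology (one neuron, two inputs, a bias) never changes during training, only the three numbers in `w` do.

```python
import random

def step(x):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=50, lr=0.1):
    # Fixed "configuration": one neuron, two inputs plus a bias.
    # Learning changes only the weights, never the wiring.
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(3)]  # [bias, w1, w2]
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] + w[1] * x1 + w[2] * x2)
            err = target - out
            w[0] += lr * err        # nudge each weight towards the target
            w[1] += lr * err * x1
            w[2] += lr * err * x2
    return w

# AND gate as training data: fire only when both inputs are on.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(data)
print([step(w[0] + w[1] * x1 + w[2] * x2) for (x1, x2), _ in data])
```

After training, the behaviour “AND” lives only in three numbers; nothing in the network is a stored rule you could point at, which is a faint echo of the opacity described above.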
One of the biggest problems in comparing today’s neural networks to the workings of a brain is the discreteness of the networks. Each neuron is processed separately – one at a time, one after another. In the brain, all neurons act at the same time – and here again is the danger Epstein warns us of: when I say “at the same time”, I do not mean “synchronously”, as one might be prejudiced to think. The neurons do not all say “one, two, three, GO” and then do their processing at exactly the same time. Instead, each can be in any state of the process towards firing a signal, quite independently of the others. Computer devices today have a central clock signal that synchronises the activity of every part of the device. That difference is unfortunately easy to forget.
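The contrast between a shared clock and independent firing can be made concrete with a toy simulation. This is only an illustration of the scheduling difference, not of real neuron dynamics; the neuron names and firing intervals are made up.

```python
import heapq

def clocked_ticks(neurons, steps):
    """Lockstep model: every neuron acts on every shared clock tick."""
    log = []
    for t in range(steps):
        for name in neurons:
            log.append((t, name))
    return log

def asynchronous_fires(intervals, until):
    """Event-driven model: each neuron fires on its own schedule,
    with no shared clock; events are processed in time order."""
    events = [(interval, name, interval) for name, interval in intervals.items()]
    heapq.heapify(events)
    log = []
    while events and events[0][0] <= until:
        t, name, interval = heapq.heappop(events)
        log.append((t, name))
        heapq.heappush(events, (t + interval, name, interval))
    return log

# Two neurons on a shared clock: perfectly aligned activity.
print(clocked_ticks(["a", "b"], 3))
# The same two neurons firing independently every 2 and 3 time units:
# their activity interleaves and only occasionally coincides.
print(asynchronous_fires({"a": 2, "b": 3}, until=6))
```

In the first log every timestamp carries both neurons; in the second, coincidence at a timestamp is the exception rather than the rule, which is closer to the “independent of each other” picture above.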
As I have written in an earlier post, to create an artificial intelligence with, for example, neural networks, it needs to be a learning system inside a consistent world that it can observe and manipulate. Most systems these days are either programmed with the programmer’s chosen presets or taught with selected learning data.
The brain is born with at least algorithms, programs, models, memories, processors, subroutines, and buffers – this is where I could be said to disagree most strongly with Epstein’s article. Epstein’s claim against this conflicts with his very previous paragraph: “Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.” Reflexes and learning mechanisms are fully classifiable as algorithms, programs, and subroutines. They are not stored in the nervous system (which includes parts elsewhere in our bodies than the brain in the head) on punch cards, but neither are computer operating systems. They are not a separable piece of software that could be taken out of the physical brain, but neither is the basic input/output system (BIOS) of a computer device. As I mentioned before, the brain has a memory, and I have heard claims that it even has memories from gestation. I don’t believe the neurons start up only after birth; I find it very likely that the first memories are formed as the first neurons are created. Certainly almost all of these memories are effectively lost even before our birth, but just as a butterfly’s wingflaps affect the global weather until the end of the world, these first memories also affect future memories – next to non-existent, but still some – an infinitesimal effect. Lastly, each neuron, and the brain as a whole, can be considered a processor: it processes input into output.
Certainly, dwelling on these analogies increases our misconception of thinking of the brain and mind too much as a computer system, and it is wise to remember, and be aware of, having one’s mind locked inside a box when doing scientific study. Epstein’s warnings are in place here, even if he might push the point a bit beyond what is true.
Unfortunately, the easiest thing is to stick to extreme thinking. The second easiest is to switch between two opposite extremes. That is already better than the easiest, as one’s perspective is wider and one can see more options and alternatives. The hard thing is to find the middle ground that surrounds reality – the thing that exists whether you believe in it or not, to paraphrase Neil deGrasse Tyson.
Our knowledge of how the brain works is increasing through science. That science is done by people who have my full admiration, such as senior research psychologist Robert Epstein and associate professor of psychology Michael Graziano. My point here has not been to refute Epstein’s article – quite the opposite: I have tried to explain it to you, and to myself, as I find it has an important message, but it may provoke its target audience into digging mental trenches and not listening to what is being said, because the wording is personally upsetting. I fear that my own softer, conciliatory tone makes people think I disagree with Epstein’s article and agree with their misconceptions, which is why I need to state this explicitly at the end of my entry. In the end, it is up to everyone to really think about things, and about their own thinking, and to find their way out of the prejudices that hinder their activities – the underlying point in all of this.
Epstein, Robert: “The Empty Brain”. Aeon. https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer (referred 5 August 2016)
Beck, Kent and Andres, Cynthia: “Extreme Programming Explained: Embrace Change”. Addison-Wesley Professional, 2nd edition, 2004
Earlier posting in this blog: “(Artificial) Intelligence is not Possible Without Sensors and Manipulators”. https://tomibgt.wordpress.com/2014/11/16/artificial-intelligence-is-not-possible/
Earlier posting in this blog: “Awareness of Awareness”. https://tomibgt.wordpress.com/2014/10/31/awareness-of-awareness/