Is the Brain a Computer and the Mind a Program?
By Olivia Gawel
Is the line between the computer and the brain becoming blurred? This essay addresses the question by considering two opposing approaches: first, that the brain is a computer, and second, that it is not. Both examine human cognitive abilities such as thought, language and intentionality, using a hypothetical computer and its behaviour for the purposes of conceptualization, and, in the end, hopefully shed some light on which ‘pew’ to sit in.
The first view goes hand in hand with the Computational Theory of Mind (CTM). Its central idea is that the brain acts as a computer whilst the mind is a computer program, thus characterizing cognition as a form of computation (McCulloch and Pitts, 1943; Putnam, 1960; Fodor, 1975). The main tasks of CTM are to determine the computing system’s functional architecture and how cognitive abilities are executed within that system (McLaughlin, 2004).
One of the earliest ideas regarding computation and cognition concerns the performance of the building blocks of the brain and their similarities to a computer’s. According to this idea, “neural activity is computation and neural computation explains cognition” (Piccinini and Bahar, 2013, p. 454; McCulloch and Pitts, 1943). A longstanding debate, however, has been whether neural computation happens in an analog system “representing signals continuously” or in a digital system “representing signals in the timing of pulses” (McCulloch and Pitts, 1943; Piccinini and Bahar, 2013). This debate has played an integral role in perspectives of the brain as a computer, since it determines the type of computer one should take into account. More recently, neural computation has come to be considered a combination of analog and digital neural coding (Shu et al., 2006; Clark and Häusser, 2006; Zbili and Debanne, 2019).
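To make the contrast concrete, consider the toy simulation below (a minimal sketch, not a biophysical model; the parameters and input are invented for illustration). A single unit integrates a continuously varying input, an analog quantity, yet what it emits are discrete, all-or-nothing spikes whose timing could carry a digital code.

```python
def integrate_and_fire(input_current, threshold=1.0, leak=0.1, dt=1.0):
    """Toy leaky integrate-and-fire unit: returns (potential trace, spike times)."""
    v, trace, spikes = 0.0, [], []
    for t, i_in in enumerate(input_current):
        v += dt * (i_in - leak * v)   # continuous (analog) integration
        trace.append(v)
        if v >= threshold:            # discrete (digital) spike event
            spikes.append(t)
            v = 0.0                   # reset after firing
    return trace, spikes

# A smoothly varying analog input yields a train of discretely timed spikes.
current = [0.3 + 0.2 * (t % 10) / 10 for t in range(50)]
_, spike_times = integrate_and_fire(current)
print("spike times:", spike_times)
```

On the hybrid view cited above, both descriptions of the same unit are legitimate: the membrane potential is analog, while the spike train it produces is digital.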
Were the brain a computer, this would suggest that a computer could perform cognitive abilities such as thought. According to CTM, “thinking is a computational process involving the manipulation of semantically interpretable strings of symbols which are processed according to algorithms” (Schneider and Katz, 2012, p. 154). Dennett (1998) holds that if any computer is capable of passing the Turing Test, then it should be considered a “thinking thing”.
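A minimal sketch of that picture is given below (the rule and the ‘beliefs’ are invented for illustration): strings of symbols are rewritten by a formal rule, here modus ponens, which fires on the shape of the strings alone.

```python
# CTM's picture of thinking in miniature: symbol strings rewritten by a
# formal rule. From 'X' and 'X->Y', derive 'Y' purely by string matching.

def modus_ponens(beliefs):
    """Close a set of symbol strings under the rule: X and X->Y yield Y."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(derived):
            if "->" in b:
                antecedent, consequent = b.split("->", 1)
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
    return derived

beliefs = {"rain", "rain->wet_streets", "wet_streets->slippery"}
print(modus_ponens(beliefs))  # 'wet_streets' and 'slippery' appear by pure syntax
```

On CTM’s reading, deriving ‘slippery’ from ‘rain’ in this way is a very small act of thinking: new symbol strings produced from old ones by an algorithm.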
Based on this idea, let us take part in a thought experiment by imagining that an anthropomorphized computer named HarVard has passed the Turing Test, qualifying it, on Dennett’s criterion, as a ‘thinking thing’. Since HarVard can produce internalized thought, can it consequently externalize it by producing, comprehending and processing language? Being able to determine the grammaticality and meaning of sentences in human language suggests that constructing and manipulating language is a computational procedure: the application of a complex set of rules that ultimately pair sound with meaning (Carter, 2007; Chomsky, 2011). If HarVard can externalize its thoughts through language, then it could communicate with a human being on a multitude of topics, ranging from climate change to an existential crisis. Perhaps it could even throw in a few metaphors here and there.
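The sketch below illustrates grammaticality as rule-following (a toy grammar invented for illustration, covering only a handful of words): a recognizer accepts or rejects a word string purely according to whether the rules can derive it.

```python
# A toy context-free grammar: nonterminals expand via the productions
# below; anything not listed as a key is a terminal word.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["the"], ["a"]],
    "N":   [["computer"], ["mind"]],
    "V":   [["models"], ["thinks"]],
}

def derives(symbol, words):
    """True if `symbol` can expand to exactly the word sequence `words`."""
    if symbol not in GRAMMAR:                     # terminal: must match the word
        return len(words) == 1 and words[0] == symbol
    for production in GRAMMAR[symbol]:
        if len(production) == 1:
            if derives(production[0], words):
                return True
        else:                                     # binary rule: try every split
            left, right = production
            for split in range(1, len(words)):
                if derives(left, words[:split]) and derives(right, words[split:]):
                    return True
    return False

print(derives("S", "the computer models the mind".split()))  # True
print(derives("S", "computer the models mind".split()))      # False
```

Judging the first sentence grammatical and the scrambled one ungrammatical requires nothing beyond mechanically applying the rules, which is exactly the kind of procedure a computer can run.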
Presuming it can think and manipulate language, would HarVard be able to express intentionality, the last piece of the puzzle? Although there is much debate on the nature of intentionality, this pivotal element deserves at least an elementary treatment (Montague, 2007; Millikan, 1984). One notion in this framework is the “intentional strategy” or “intentional stance”, which we will adopt to decide whether HarVard can express its desires, needs, intentions and so on, just as every human being does habitually and effortlessly (Dennett, 1981).
Let us first establish the setting. HarVard and a test subject are playing a game of online chess. The tension is high, palms are sweating and circuitry is humming: it is the match of the season. Since it has been established that HarVard is capable of thought, it is deliberating whether to take the subject’s ‘knight’. However, it knows that once the subject’s ‘knight’ is gone, it can say goodbye to its own ‘rook’, ultimately losing the game (Dennett, 1981). If HarVard declines to take the knight, does that mean it has intentionality? From the intentional stance, it refrained from the capture because it believed it would lose its rook, and in doing so it accomplished what it wanted to do.
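Whatever HarVard is ‘really’ doing inside, behaviour like this can come from simple lookahead. The sketch below is a toy evaluation, not a real chess engine; the moves and material values are invented for illustration. The program picks whichever move scores better after the opponent’s reply, and the intentional stance redescribes that choice as ‘not wanting’ to lose the rook.

```python
# Candidate moves mapped to (immediate material gain, loss exposed on the
# opponent's best reply), in conventional pawn units.
MOVES = {
    "take_knight": (3, 5),   # win a knight (3) but lose the rook (5)
    "quiet_move":  (0, 0),   # no gain, no exposure
}

def evaluate(move):
    gain, reply_loss = MOVES[move]
    return gain - reply_loss         # net material after the opponent replies

best = max(MOVES, key=evaluate)
print(best)  # 'quiet_move': taking the knight nets -2, so it is declined
```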
This school of thought can make it seem that the brain and the computer have much in common. However, there are still many cracks to be filled in the new era of machine learning.
Or, perhaps, these cracks will never be filled. In this case, the opposing approach seems plausible: the brain is not a computer.
As mentioned earlier, computation is a pivotal piece of this puzzle. One of the most vital arguments in line with this approach contrasts consciousness with computation. According to Tallis (2011, p. 197), the workings of a computer amount to “the passage of vast numbers of small currents through vast numbers of microscopic circuits”. On this view, such passage of currents cannot possibly explain consciousness; consciousness is not merely a calculation over numbers (Tallis, 2011).
According to two major proponents of this approach, Searle (1980) and Tallis (2011), a computer cannot truly think. Although it is made up of extremely complex electronic circuits, it has merely been designed to behave as if it were thinking; any ‘thinking’ ascribed to it is observer-relative. With regard to the Turing Test, although a computer may be capable of passing it, that does not show that it can think, nor does simulating the behavior of thinking mean that the simulator is a conscious being (Tallis, 2011). On Tallis’ view, then, HarVard did not perform thought during the chess match, nor did it make its moves because it wanted to: since it is not a human being, it could not have acted as one by executing an intention. Presuming that HarVard cannot think, could it still use language and thus communicate with other beings?
One of the central arguments within this approach, refuting strong AI, is a thought experiment called the ‘Chinese Room’ Argument (Searle, 1980). Essentially, a computer system operates solely according to the rules of its program. Computers are therefore purely syntactical devices, working only via syntactical operations and lacking semantics and intentional states (Searle, 1980). Under these assumptions, even if HarVard were able to produce metaphors or share its beliefs, it would do so only at a communicative level, without grasping the depth of meaning. Moreover, the Chinese Room Argument connects to another fundamental issue, the Symbol Grounding Problem: the computer manipulates the provided symbols based on their shape rather than their meaning, and so does not understand the depth hidden behind each symbol (Harnad, 1990). Hence, HarVard could not interpret these symbols intrinsically; their meanings would never be grounded in anything beyond further symbols. In this sense, symbol manipulation alone cannot account for the depth of cognition (Harnad, 1990).
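Searle’s point can be made vivid with a deliberately dumb sketch (the ‘rulebook’ below is an invented stand-in): a program that maps Chinese input strings to Chinese output strings by pure shape-matching, with no access to what either side means.

```python
# The 'room': input shapes matched to output shapes, nothing more.
RULEBOOK = {
    "你好吗": "我很好",              # "How are you?" -> "I am fine"
    "今天天气怎么样": "天气很好",    # "How is the weather?" -> "The weather is nice"
}

def chinese_room(squiggle):
    """Return the rulebook's response for a shape, or a stock fallback."""
    return RULEBOOK.get(squiggle, "请再说一遍")   # "Please say that again"

print(chinese_room("你好吗"))  # fluent-looking output, zero understanding
```

From the outside the exchange can look fluent; on the inside there is nothing but lookup, which is precisely the gap the Symbol Grounding Problem names.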
The debate remains open, and many have taken a stance. Both approaches present convincing arguments, making it difficult to choose which side to bet on. Thought, language and intentionality have been portrayed here through a hypothetical computer in an attempt to understand how each might, or might not, be performed computationally. Taking both sides into consideration, what can be agreed upon is that the brain and the computer share one of the most fundamental and ironically basic functions: processing, whatever its extent and complexity. That is why the question of whether the brain is a computer should perhaps be shifted to determining the similarities and differences in how each processes information. Such a discussion would re-shift one’s focus from which side is the right side to how both perspectives can help answer another fundamental question: how can computation help us understand how the mind emerges from the brain?
About the Author
Olivia Gawel is a student at the University of Barcelona studying Cognitive Science (graduation year: 2021) and a student at Pompeu Fabra University studying Cognitive Neuroscience (graduation year: 2022).
References
Carter, M. (2007). Minds and Computers: An Introduction to the Philosophy of Artificial Intelligence. [Electronic version]. Edinburgh University Press.
Chomsky, N. (2011). Language and Other Cognitive Systems. What Is Special About Language? Language Learning and Development, 7(4), 263-278.
Clark, B., & Häusser, M. (2006). Neural coding: hybrid analog and digital signaling in axons. Current Biology: CB, 16(15), R585–R588.
Dennett, D. C. (1981). True Believers: The Intentional Strategy and Why It Works. In A. F. Heath (Ed.), Scientific Explanation: Papers Based on Herbert Spencer Lectures Given in the University of Oxford (pp. 150-167). [Electronic version]. Clarendon Press.
Dennett, D. C. (1998). Brainchildren: Essays on Designing Minds. [Electronic version]. The MIT Press.
Fodor, J. A. (1975). The Language of Thought. [Electronic version]. Crowell.
Harnad, S. (1990). The Symbol Grounding Problem. Physica D, 42, 335-346.
McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biology, 5, 115-133.
McLaughlin, B. P. (2004). Computationalism, Connectionism, and the Philosophy of Mind. In L. Floridi (Ed.), The Blackwell Guide to the Philosophy of Computing and Information (pp. 135-151). [Electronic version]. Blackwell Publishing.
Millikan, R. G. (1984). Language, Thought, and Other Biological Categories: New Foundations for Realism. [Electronic version]. The MIT Press.
Montague, M. (2007). Against Propositionalism. Noûs, 41(3), 503-518.
Piccinini, G., & Bahar, S. (2013). Neural computation and the Computational Theory of Cognition. Cognitive Science, 37(3), 453–488.
Putnam, H. (1960). Minds and Machines. In S. Hook (Ed.), Dimensions of Mind (pp. 138-164). New York University Press.
Schneider, S., & Katz, M. (2012). Rethinking the language of thought. WIREs Cognitive Science, 3(2), 153–162.
Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424.
Shu, Y., Hasenstaub, A., Duque, A., Yu, Y., & McCormick, D. A. (2006). Modulation of intracortical synaptic potentials by presynaptic somatic membrane potential. Nature, 441(7094), 761–765.
Tallis, R. (2011). Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity. [Electronic version]. Acumen Publishing.
Zbili, M., & Debanne, D. (2019). Past and Future of Analog-Digital Modulation of Synaptic Transmission. Frontiers in Cellular Neuroscience, 13, 160.