I. Can a computer think?
Can a computer think? It is imperative that we answer this question, for the answer would not only determine the ethical standards of an AI-infused future[1], but also shed light on the nature of thought and the legitimacy of physicalism. In his famous paper "Computing Machinery and Intelligence," Turing defined a thinking computer as one that succeeds in the Imitation Game, later termed "the Turing Test". In this paper, I will briefly summarize Turing's thesis and then supply arguments for why the gap between our mind[2] and a computer is not as sharp as one may think.
II. Turing’s thesis
Turing proposes that if we tried to answer the question "Can machines think?" by defining "machine" and "think", we would just get into a fruitless quarrel. Instead, we should replace this question with an equivalent inquiry: can a machine pass "the imitation game"[3]? We, the complex biological machines, are not what this question is interested in. Rather, the term "machine" here is to be understood as any imaginable computing device, such as a digital computer. I shall use the word "computer" in this paper to mean a digital computer in the common sense, not a human being (though that equivalence is the thesis of this paper).
I think Turing’s thesis can be understood in two ways. One way to understand him is through the lens of a pragmatist, who simply aims to employ a practical standard to judge whether a computer can think. Call this view the Weak Turing Thesis: a thinking computer is one that can behave exactly the same as a human mind. On this view, we need not be bogged down by the nature of thinking and where consciousness fits in the picture. I speculate that this is the view Turing himself adopted, given his comment “I do not wish to give the impression that I think there is no mystery about consciousness… But I do not think these mysteries necessarily need to be solved before we can answer the question [whether a computer can think] we are concerned with in this paper” (Turing, 447).
Another way to understand it is the Strong Turing Thesis: a thinking computer is one whose thinking has exactly the same nature as that of a human mind. The consequence of the strong thesis is that if a computer passes the Turing Test, we will have to grant that its thinking is a human type of thinking. I believe most of us are willing to grant the weak thesis, but not the strong one. In the next section, I will respond to Searle’s objection to the Strong Turing Thesis, namely that understanding is missing from the “thinking” of a computer.
III. What is the difference between our mind and a computer?
In the last section of his paper, Turing proposes that a realistic way to actualize such an imagined thinking machine is to build it to resemble a child’s mind, and then teach and train it to grow. Turing remarkably predicted a future where machine learning can simulate a learning mind. What he did not do in his paper was go in the other direction and show how our mind is similar to a machine learning algorithm.
Turing admits at the start of his paper that his is not a discussion of the meaning and definition of “think”. He implicitly turns to pragmatism about the nature of thinking by defining a thinking machine as one that succeeds in the imitation game. Two types of objections can arise. One type is against the Weak Turing Thesis; it would have to show that there exist functional and practical differences between human thinking and machine computing that are not captured by a Turing Test. But as discussed above, we need not limit our Turing Test to the form of an imitation game. A Turing Test in general is one where a person cannot tell another person’s mind from a machine. This is, by definition, a comprehensive standard for concluding a practical similarity between the two. Therefore, in the rest of this section, I will attempt to support the Strong Turing Thesis by discussing why our mind is more similar to a computer than we think, even if we do not yield to pragmatism. The argument I will defend may be summarized as follows:
P1: If something-beyond-computation (SBC) is something our mind possesses, it must have an overarching monitoring function, which requires that it be present even when we do not consciously call on it.
P2: If SBC is always present, we should be able to invoke it in any case.
P3: We are not able to invoke it in some cases.
Conclusion: SBC does not exist.
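The logical shape of this argument can be made explicit. The following is a sketch in Lean; the proposition names are mine, standing for “our mind possesses SBC”, “SBC is always present”, and “SBC is invocable in every case” respectively:

```lean
-- P1: possession implies constant presence.
-- P2: constant presence implies invocability in every case.
-- P3: invocability fails in some case.
-- Conclusion: our mind does not possess SBC.
example (Possess Present Invocable : Prop)
    (p1 : Possess → Present)
    (p2 : Present → Invocable)
    (p3 : ¬ Invocable) : ¬ Possess :=
  fun h => p3 (p2 (p1 h))
```

The argument is thus a chained modus tollens: the burden of the following paragraphs is to defend the premises, especially P1 and P3.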
If there is such a fundamental difference between our thoughts and the computation of a machine, let us call it SBC (something-beyond-computation). What feature does SBC possess? It must possess some feature that a simple computation cannot possess or encode; otherwise SBC would not be beyond computation.
Think of a program that computes the addition of two numbers a and b, but always erroneously outputs a + b – 1 instead of a + b. To correct such an error, we must examine the program and fix the lines of code that produced the result, because the program does not possess a self-correcting feature. Now consider a kid who answers 27 when asked “what is 13 plus 15?”. We ask the kid to think again; he comes to realize he was wrong and changes his answer to 28. One who does not wish to buy into Turing’s thesis and equate the kid’s correction of his error with a correction of algorithm or hardware has to agree that improved understanding and introspection led to the correction. One would also typically admit that concepts such as “improved understanding” and “introspection” are beyond computers and are unique attributes of consciousness. In other words, whatever we have that is beyond computation should be able to correct and monitor our computations. A reasonable feature we can attribute to SBC is thus a monitoring feature: if there is such a thing as SBC involved in our perceptual and computational processes, it includes at least a check and examination of our computation/thinking.
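The broken adder can be sketched concretely. This is a minimal illustration in Python; the function names are mine:

```python
def buggy_add(a, b):
    # Always outputs a + b - 1 instead of a + b.
    return a + b - 1

# The program cannot notice its own error; the correction must come
# from outside, by inspecting and editing the code itself:
def fixed_add(a, b):
    return a + b

print(buggy_add(13, 15))  # 27, the kid's first answer
print(fixed_add(13, 15))  # 28, the corrected answer
```

The point of the sketch is that nothing inside `buggy_add` can detect or repair the error: the fix is an external edit, whereas the kid’s fix appears to come from within.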
The above hypothesis can be formalized this way: if there does exist something that distinguishes us from computers, it must include introspection that can examine, correct, or at least influence our computation in every case. Otherwise, it shares the same nature as computation.
Now let us look at image 1. Do grids A and B share the same color? If you have never seen this picture before, you will most likely say “No, B is a lighter shade of grey than A”. However, the two grids share the exact same RGB value rendered on a computer screen; image 2 is the proof. Now that you know this, look at image 1 again: you still see A and B as different. You cannot correct this simple perceptual process. Something has gone wrong here in a way similar to the program that outputs a + b – 1 for the addition of a and b. In this process, it is obvious that all we had was computation. There is nothing we can do to influence or check the result. Our mind becomes a function that outputs deterministically, given a set of inputs. Such a predictable and unpreventable failure in our visual perception makes it obvious that SBC cannot do much in this case. We are hardwired to produce certain responses. It is a challenge to find where any SBC lies in a process as hopelessly erroneous as this.
Admittedly, the above analysis merely gives a nonequivalence claim: that SBC is not present in some cases. It does not show that SBC is missing in all cases. In other words, we have not completely established premise 1. To respond to this objection, I propose that we cannot have cases where SBC is completely missing and yet still have a complete SBC.
SBC’s existence implies its constant presence: there is no reason for it to be present in some cases but not in others. By analogy, if you have eyes, and one of the necessary functions of eyes is to enable vision, you must be able to see, or at least freely choose to enable or disable seeing by opening or closing your eyes. If you cannot do that, you would not argue that your eyes are just fine because sometimes you are able to open them; what you have is less than eyes. A complete concept of eyes requires the constant ability to see, and SBC requires the constant ability to monitor computation[4].
Additionally, the claim that SBC has an error-correcting function in even some cases is not only potentially inaccurate but unfalsifiable (which would mean there is no SBC at all). If I had not pointed out that grid A and grid B share the same color, and convinced you of it with image 2, you simply would not have known that they do without further input. That may be the case for every perceptual output we arrive at, since most of the time we do not examine our outputs with such rigor. There may be nothing left, and no SBC, once we take away our computation. As long as one admits that introspective correction is a necessary feature of any SBC, one has to see the above example as a tip-of-the-iceberg problem. The best explanation may simply abandon the illusory concept of SBC altogether and admit that all we have is perception, an imperfect yet encodable algorithm wired up with physical materials.
What does the non-existence of SBC imply? If the equivalence between our mind and a computer can be established, we gain a convenient way of answering many difficult questions, such as “why do you like song A?” and “why do you believe in A?”. Consider a basic ML algorithm that can recognize puppy pictures. When asked what that recognition consists of, an answer need contain no more than the algorithm, the training hardware, and the training data (which by no means include all puppy pictures), since with this information we can reproduce the exact result; nor is anything more necessary. So why are we so convinced that, when asked “why do you prefer song A?”, our answer must contain anything more than the physical structure of the brain, the sound-perception algorithm, and all the songs we’ve listened to in the past?
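The reproducibility claim can be illustrated with a toy sketch: a minimal perceptron trained on made-up “puppy” features. Every name, feature, and number below is hypothetical; the point is only that the same algorithm, seed, and data reproduce the exact same result:

```python
import random

def train(seed, data, epochs=50, lr=0.1):
    # A toy perceptron: given the same algorithm, seed, and training
    # data, the resulting model is reproduced exactly.
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Made-up "puppy vs. not-puppy" examples (two hypothetical features).
data = [((0.9, 0.8), 1), ((0.8, 0.7), 1), ((0.2, 0.1), 0), ((0.1, 0.2), 0)]

# Same inputs in, the exact same trained model out -- run after run.
print(train(42, data) == train(42, data))  # True
```

Nothing beyond the algorithm, the seed, and the data is needed to account for what the trained model “recognizes”; the analogy in the text asks why our musical preferences should require anything more.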
IV. Conclusion
In this paper, I have attempted to show that the Strong Turing Thesis is not as absurd as it seems at first glance, since our understanding of something-beyond-computation may just be an illusion. Future research should focus on the practical implications of a Strong Turing Thesis.
Reference
Turing, A. M. “Computing Machinery and Intelligence.” Readings in Cognitive Science, 1988, 6–19. https://doi.org/10.1016/b978-1-4832-1446-7.50006-6.
Acknowledgement
I thank Professor Wayne Wu for his patient guidance in my initial topic selection and his offer to help during the revision process. His incredibly interesting lectures on Turing and John Searle were also one of the reasons why this paper came to be.
[1] AI stands for artificial intelligence.
[2] In this paper, mind = brain + “consciousness”.
[3] “The imitation game” is set up in the paper such that the object of the game for a human interrogator, conversing with two subjects X and Y, is to determine whether “X is A and Y is B” or “X is B and Y is A”. For example, let X be the computer and Y be a woman. The computer will then try to fool the interrogator into thinking that it is the woman; if it succeeds, we deem the computer to possess the ability to think. However, we need not limit ourselves to such a specific setup. A generalized imitation game, which we nowadays call the Turing Test, can be described as follows: a test in which a human cannot tell whether an object in a black box is another human or a computer by conversing with it.
[4] Could it be that only a “good” SBC requires the constant ability to monitor computation, while a regular SBC requires just some presence? I think not; it makes no sense to split cases on the functional extent of SBC. If anything of the sort exists, it should be a binary concept that either exists with the desired property or does not.