Reactions to Readings


In Alan M. Turing's piece Computing Machinery and Intelligence, he raises a number of issues that concern not only his own methodology for constructing an "intelligent" machine but also, in a more global sense, how to concretely determine intelligence. The primary focus of his hypothesis, from what I observed in reading not only this piece but also Douglas R. Hofstadter's A Coffeehouse Conversation on the Turing Test, is personal representation. If, in the "imitation game" Turing proposes, the computer can mask the fact that it is a machine, then this is a principal measure that the computer can indeed "think." This takes Turing's argument at a simple and rather surface value, but it still encapsulates what I feel is one of his central points: the ability to represent oneself. This, to me, is a reasonable measure of "artificial intelligence": if a person cannot tell the difference between a machine and a human, then the machine has successfully masked itself. Perhaps there could be subtle indications, available to the trained eye, that the subject is indeed a machine, but if the machine's human posturing is effective, then I would most likely deem Turing's test successful.
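The structure of the game as just described can be sketched as a small program. Everything here is hypothetical illustration rather than anything from Turing's paper: the function names and the judge interface are invented, and the point is only that the interrogator sees labelled replies, never identities, so "masking" is measured purely by whether the judge's guess is wrong.

```python
import random

def imitation_game(questions, decide, human, machine):
    """Toy sketch of the imitation game (all names hypothetical).

    `human` and `machine` are callables mapping a question to a text
    reply; `decide` plays the interrogator, receiving only a transcript
    of (label, question, reply) tuples and returning the label it
    believes belongs to the machine.  Returns True if the machine
    escapes detection.
    """
    # Hide the identities behind shuffled, anonymous labels.
    players = [("A", human), ("B", machine)]
    random.shuffle(players)

    # The interrogator's view: labelled replies only, no identities.
    transcript = [(label, q, player(q))
                  for q in questions
                  for label, player in players]

    guess = decide(transcript)
    truth = next(label for label, player in players if player is machine)
    return guess != truth  # masked successfully iff misidentified
```

A judge that can spot some tell in the replies always wins; the machine "passes" only when no such tell exists, which is exactly the surface-value reading of the test above.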

This is a particularly interesting consideration given several aspects of modern computing. For example, video game artificial intelligence has advanced to the point that, for certain games, a large portion of the development effort goes into accurately simulating human players of differing skill levels. "Bots," as they are frequently called, can often fool the unaware player into thinking that they are battling other, thinking humans; however, as remarked earlier, there are still tells that "give away" the computer to a trained player or viewer. Sometimes, given the context of a particular game, one can recognize more subtle indications that the opponent is a machine: characters lingering, completely idle, in the playing field until other players enter and the bots' routine begins can easily give the bots away. In most cases, however, this brand of artificial intelligence is good enough to fool most players into thinking they are confronting other humans; nonetheless, they are still playing with programs. In this example, the immediate validation of Turing's test holds true; however, we are not measuring whether the machine can "think" in Turing's terms. It can "think" in this particular context, the game, and fool a player only to a certain extent, but we have not yet seen that a "bot" can emulate human thought. This most nebulous aspect of Turing's argument is also perhaps the most important: if Turing's conception of "thought" were defined, if possible, in more concrete terms, his test might be more plausible to contrarians, many of whose views were presented both by Turing himself and in Hofstadter's piece. I agree with Turing's proposed methods, but I am also aware that, at this point, this type of test can only be carried out in isolated contexts, such as the video game example. Turing's test of "intelligence" still seems to require more definition before it can be fully, and properly, practiced.
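The kind of "tell" described above can be made concrete with a minimal sketch: an observer flags an opponent that lingers completely idle far longer than any human would, or whose action timing is inhumanly regular. The thresholds and sample format below are invented for illustration and are not taken from any real game.

```python
def looks_like_bot(samples, max_idle=30.0, min_jitter=0.05):
    """Heuristic 'tell' detector (illustrative thresholds only).

    `samples` is a list of (seconds_idle, timing_jitter) observations
    taken while watching one opponent: how long it has stood completely
    idle, and how much its action timing varies.  Humans fidget and are
    irregular; the hypothetical bot here does neither.
    """
    longest_idle = max(idle for idle, _ in samples)
    mean_jitter = sum(j for _, j in samples) / len(samples)
    # Flag either tell: statue-still waiting, or clockwork regularity.
    return longest_idle > max_idle or mean_jitter < min_jitter
```

A check like this is why the trained player beats the test in this narrow context: the simulation is convincing only until someone looks for the right statistic.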


Computing Machinery and Intelligence, by Alan Turing, is in many senses a fascinating work, and one that demands careful reading. Although there are flashes of brilliance, there are also many points of which I am skeptical.

Turing is essentially trying to refute any notion of a fundamental human spirit, or basic implicit element of humanity, that cannot be simulated externally. I am not convinced that a machine could be passionate about anything, or even successfully simulate passion externally. The particular combination of rationality and irrationality that informs human decisions cannot be replicated by a machine. Even if the machine were programmed to make irrational decisions some of the time, I am not convinced that the pattern of these decisions could ever be truly convincing.

The discussion of simulation in Hofstadter's dialog is essential to understanding Turing's definition of machine intelligence. He claims that labelling some processes as simulation is vacuous, as the label can be extended until everything can be dismissed as such. This hinges upon a machine's ability to learn, as directly modelling all the processes necessary to create a sufficiently convincing simulation of a complex process is virtually impossible.

Even if a machine had the ability to augment itself with knowledge gained from its surrounding environment, it seems highly unlikely that such a machine could be given the necessary initial state of a human. To completely simulate all the biological systems that contribute to the learning of a child would be an astounding feat. Moreover, providing a sufficiently rich environment in which the machine could learn seems as difficult as creating the machine in the first place.

That a machine could attain intelligence seems far more likely than that it could attain human intelligence. Any machine that could learn, and process what it learns into some sort of rationality, could very well be classified as intelligent; but without human sensory experience, it would be nearly impossible for a machine to effectively simulate a human.

Turing claims that "at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." This is not true, nor will it be true in the foreseeable future. The fundamental nature of humans as beings with vanity and a strong sense of exclusivity prevents it. Turing tries to define intelligence in a way that does not depend upon any abstract notions of thought, but his definition will never be accepted by many. Our concept of thought is so intrinsically linked to our notion of intelligence that convincing many people of machine intelligence without machine thought (in the human sense) is next to impossible. Although Turing's hypothesis is convincing when intelligence is considered in complete abstraction, it will not be part of "general educated opinion" for quite a while, if ever.


The articles by Turing (1950) and Hofstadter (1981) both discuss the seemingly simple question of whether machines can think. Put in different terms, can artificial intelligence ever emulate the processes involved in human thought? I think the reaction of most individuals would be a quick "no." As both articles point out, it is difficult to imagine a computer that has the same range of faculties common to all human beings (e.g., emotions, a propensity for errors, etc.). However, using Turing's non-standard "definition" of thinking (i.e., the sex imitation game), it is much more reasonable to consider a computer as thinking. Under this new definition, we should care little about the underlying processes of the act of thinking itself and how well a machine emulates them. Rather, we should concern ourselves only with how well a machine can deceive a human judge in the imitation game. Nonetheless, although an interesting subject in its own right, I think both articles overstate the debate over thinking machines. Specifically, I believe it is a useless endeavor to discuss how closely computers can think like humans. The more useful question is "how useful are machines as a tool in understanding thinking?"

I think the sticky issues in cognitive modeling are illustrative of the point I'm trying to make. Cognitive models attempt to theoretically dissect thought processes into smaller, digestible entities, often abstract in nature, in an effort to understand the processes themselves (e.g., short-term and long-term memory). However, most cognitive psychologists would never stretch to say that a particular model represents how the brain really functions. Since the "real" processes in the brain are fairly opaque, cognitive models give only a rough theoretical approximation of the underlying processes involved, at least until evidence clearly disproves a model or a more refined one arrives. Stated more bluntly, it is usually accepted from the outset that a cognitive model is wrong or under-defined. The model is just a tool to help understand the complexities of the brain in a more comprehensible way.

I like to think that machines programmed to think are much like cognitive models. The point is not necessarily to consider how well machines can think like humans. As stated before, I think it is fairly obvious to most, at least at this time, that computers are nowhere near approximating the full range of human thinking. Furthermore, the underlying processes involved in thinking are opaque, so it is ultimately futile to try to fully gauge how well a machine could emulate real human thinking. This futility is highly apparent in the heated discussion described in Hofstadter's paper. I think it is more important to consider how good the machine is as a tool for approximating real human thinking. The machine is never thought to be a real "thinking human" but rather a close, or sometimes not-so-close, approximation until a better machine comes along.


I believe that the Turing Test is a reasonable way to test for a certain level of intelligence. I would certainly consider a machine that could pass the Turing Test to be intelligent, but there are animals that many people would consider intelligent that cannot pass it. This necessitates a clarification: passing the Turing Test on a consistent basis, in a variety of conditions, does indicate intelligence, but failure does not preclude the possibility of intelligence. So, what level of intelligence is necessary for us to be impressed enough to concede that machines have achieved, or rather, that we have achieved in creating machines that have achieved, "intelligence"? From the talk of the Turing Test, it seems that we place a high degree of emphasis on the concept of creativity. While we may consider a human who processes data at a rapid pace (say, by doing arithmetic very quickly) to be intelligent, we do not consider this alone to be a sign of intelligence in a machine. Again, this suggests that "intelligence" consists of many complex tiers of cognitive processing.

One interesting issue discussed in the second packet was whether intelligence can be separated from emotion. Along those same lines, I think a distinction needs to be made between the concepts of intelligence and consciousness, specifically self-awareness. I would consider self-awareness to be necessary in order for a being/creation to be able to experience emotion, but I would not consider it necessary for the existence of intelligence. It is probably necessary for certain levels of intelligence, but not for more rudimentary ones.

Why do we deny that machines are already intelligent? If we accept that, on the most basic level, human intelligence is simply the result of mechanical processes, why can we not accept that we have already created machines with intelligence, albeit intelligence that is comparatively low next to human intelligence? This does, of course, assume the separation of intelligence and consciousness, since I would not argue that any machine currently in existence is actually self-aware. If machines have already achieved a degree of intelligence, maybe it would make sense to use an IQ test to determine whether they enter the range of human intelligence. As was suggested in the article, we are a long way off from creating machines that could come close to taking the IQ test. Still, it seems a somewhat more "solid" way of determining intelligence than the Turing Test, if only because it gives a hard score that can easily be used to make comparisons between people and machines, or between machines and other machines.


Alan Turing provided a very convincing argument for the possibility of a "thinking" computer in his essay Computing Machinery and Intelligence. That the argument would be convincing even today makes the article all the more amazing, because it was written in 1950. Despite living in an era of very rudimentary computers, Turing was able to ascertain something of a formula for a thinking computer.

It seems, however, that Turing assumes it would be a good thing to have thinking computers. This is simply not true for the many people today who would consider thinking computers threats to their humanity. These people also believe that evolution stopped when it reached humankind, and for some reason or another that humankind is the peak of life. Why, other than foolish pride, are so many people so certain that we are the apex of life? There is simply no evidence to support such a theory. Evolution and natural selection have certainly taken on a different appearance than in the past. Many people can now overcome genetic predispositions that would have interfered with breeding and survival even just 50 years ago. However, this is a type of evolution in itself, for it becomes necessary to have the cure or treatment in every generation, a permanent change of sorts.

Because humankind will do anything to prolong and "improve" life, it only stands to reason that we will use more and more cybernetic implants until the distinction between man (or woman) and machine becomes so blurred that it will be pointless to discuss. No one would think of denying their sick child the chance to live, and most would then use any cure available to help their child live a normal life. If this cure involved cybernetic implants, then the child would live with part of a machine living for them. This is certainly true for deadly maladies, you might say, but I would ask: why not any other illnesses or "disadvantages"? Where would it end? We have shown ourselves to be impulsive with our discoveries, so would this one be any different?

Though many reject the idea, a thinking computer is very possible. From a psychobiological perspective, what are the brain and thought more than a physical organ in which chemicals carry out complex reactions, creating emotion, will, consciousness, and so on? If a soul exists, I believe the picture becomes muddled; but in a purely physical world, our consciousness is nothing more than neurons firing and chemicals being released in our brains. A little disconcerting, but logical.

Turing's argument is thorough and in-depth, and his form of argumentation is a common and effective one: he lines up the arguments against his case and refutes them one by one. It is harder to find holes in an idea when the arguer has brought up your evidence against his argument and turned it on its head before you could even voice it. This does not mean the argument is perfect, just harder to refute. For Turing, this was an amazing effort: to write about thinking computers in almost purely theoretical terms. He was describing in detail the conceptual design of something that would not exist until years after his tragic death.


It seems that most people will agree that future machines could potentially `mimic' thinking. The question then is, does mimicking thinking really mean anything? If, as in the Turing test, one could not tell whether it was true thinking (human) or `mimicking' (machine), what does it really matter? One of the main arguments that it does matter seems to be that thinking done by a machine doesn't really qualify as thinking because it doesn't get down to the `essence' of what thinking really is. Of course we think that we know what this essence is. And we like thinking that we're elite, and that thought is a characteristic reserved for humans.

If we could replicate, atom for atom, the human brain, would it be able to think? People who believe that thinking is an inherently human characteristic object to this idea. They believe that the human brain is more than the sum of its parts, and therefore that even an exact physical replica would not produce the human `mind' (an intentional distinction between `mind' and `brain'). This seems like a rather superior position to take, for it assumes that we are the only species endowed with the ability to think.

We don't really have any evidence that anyone besides ourselves thinks. Our belief that other humans think is based solely on external analysis and the assumption that all humans think. If we drop this assumption and are then presented with a machine which, to all outside observation, can think, then the belief that machines think will be just as valid as our belief that humans do. So basically we define thought as a pattern that we can perceive externally, yet we believe that it applies only to humans. If all we have are external impressions, then a machine mimicking thought should be just as valid as human thought.

We generally associate intelligence with thought. Are machines capable of intelligence? If we are talking about crystallized intelligence (Cattell), knowledge of objective facts, then I would say definitely yes. However, fluid intelligence (Cattell), an active process involving the ability to make deductions and the like, is a different story. Intelligence, in my mind, necessitates the ability to make good decisions. Decisions based on objective facts bring us back to crystallized intelligence. However, decisions about oneself necessitate awareness, and these subjective decisions seem to require emotion. How does one decide whether one would rather do x or y without having some idea of which gives the greater reward, i.e., which makes one `feel' better? Are machines capable of emotion? This is something I feel rather doubtful about at the moment, but I am not firmly convinced and could be swayed by a strong argument.

Another good point these articles make about intelligence concerns motivation. If one of the qualifications for intelligence is being able to hold a conversation, then any intelligent subject will need the motivation to continue to argue its point; in other words, it must be interested in what it is talking about. Interest again requires emotion.

Does the inability to have emotion extinguish the possibility of thought? If we equate thought with fluid intelligence, then it does. However, I don't think that thought and emotion are necessarily linked. I think you can have a machine capable of deduction based on fact alone, and that would be evidence of its being able to think. Linking emotions and thought stems mostly from the fact that both are inherent human characteristics and that much thought surrounds emotions. However, we were not asking whether machines were capable of consciousness.

This all brings us back to the real question: how do we define thought (or intelligence)? This is something people don't really agree on, as seen in the coffeehouse conversation article. The Turing Test seems like an adequate way to test for the presence of thought. However, I don't necessarily think that thought is equivalent to genuine intelligence. One might ask how we can deduce anything from the conclusion of the Turing Test, since humans don't pass it all the time either. I think the real test is not whether a machine can pass the test, but whether it can adequately play the game; this should reveal whether it is capable of thought. Genuine intelligence (fluid intelligence), however, is something I believe requires an understanding of emotion, motivation, and self-awareness, none of which the Turing Test will reveal. The capacity for these traits is also something I do not, at the moment, believe is possible for machines.

The Turing Test excels in the sense that it is relatively able to isolate thought. It takes away the impact of appearances, something that obviously influences opinion. It also gives the interrogator a good position from which to make a decision. If the test used only one subject, the interrogator would have to decide whether that subject was persuasive or not. The use of two subjects, however, allows the decision to be made on whether one subject is more persuasive than the other, which seems a more impartial test. However, the test also makes use of emotion, which might affect its outcome. To be persuasive, a subject must react to the answers of the other subject, and this frequently reveals emotions, especially if expressive characters such as exclamation points are permitted. This is another reason I believe the test should be whether the machine is capable of playing the game rather than of actually convincing the interrogator.

Ultimately I think it's possible to create a machine capable of thought. However, I don't believe we will succeed in creating a conscious machine. This is not because I think that consciousness is inherently human, but because creating a conscious machine necessitates an understanding of the nature of consciousness that I don't think we will ever have.


I personally do not feel computers can "think." Of course, this depends on what "to think" means, but I do not feel they do, or ever will, think in the same manner as a human. I believe that computers will at some point be able to feign intelligence and to learn, but I do not think they will be able to create, feel emotions, or ponder. I think the Turing Test is a reasonably good test of artificial intelligence. If programmed extensively enough, a computer can mimic a human very well. The problem is that this computer will be programmed for the purpose of passing the Turing Test, and that will be it. The programmer will put the time and research into the test, finding out what kinds of questions are generally asked and so on, to make it pass against a new interviewer and participant every time. I feel, however, that an experienced interviewer and participant will always be able to beat the computer. There are just too many human characteristics for the participant to show, and too many questions for the interviewer to ask, for the program ever to be flawless; I do not know whether computers will be able to pick these things up and learn them. Additionally, I feel that emotion is tied to thought. Computers cannot think; they can merely mimic the human thought process, and they will do this only as well as a programmer can understand it himself. They will mimic emotion but not feel it. They will be able to sense but not understand.


The Turing test is not a perfect way to measure intelligence. If a computer ever does pass the Turing test, it will be a magnificent milestone in computer development. However, it would surely not end the debate over whether machines are intelligent. The Turing test only measures the intelligence of a person's replies to questions, and there is more to human intelligence than the ability to deceive an interrogator. I am most persuaded by three particular challenges to add to the Turing test: tests of creativity, irrationality, and intuition.

The creativity objection was raised by Lady Lovelace's argument that a machine could never produce anything truly new or original. A machine seems limited in that all of its originality was preset by a programmer. A human being, on the other hand, can create something seemingly without any program determining how they will be creative. I acknowledge that a computer could produce new and original work, but I don't see how a computer could, without a specific program, create a new rule to follow. For example, a computer could write a poem if it was programmed to do so, but if it was only programmed with the ability to write, could it then create a new form of expression such as poetry? The creative power of humans is much more original, while computers' "creativity" is more contrived.

Irrationality is fundamental to the way the human intellect operates. Logic is an intentional way of thinking about a problem; it is not completely natural to a person, and there are logical fallacies that the human mind consistently makes. A computer could be taught to make these fallacies, but not to be irrational in the same way, unless the computer too felt emotions. A person often uses logic to justify and rationalize their emotions; a computer, however, would be using logic and programs to create emotions that it would then seem to rationalize. This backwards functioning of a computer may be able to fool a person, but it is not genuine.

The final challenge to the Turing test is, "Can a machine make intuitive leaps?" The human intellect has an unexplainable ability to reach conclusions intuitively. This is different from the discussion of serial limits at the end of Turing's article; I am not talking about heuristics, which people use to reach an answer more quickly, and more often inaccurately, than a serial search would. I am talking about the more peculiar phenomenon of being able to walk away from a problem and then return hours later, when the answer suddenly pops into mind. This is more than unconscious computation, because the answers are not always attainable through a serial check. For example, a song lyric may simply pop into your mind, or for that matter a new invention. The intuitive leaps of the human mind are a very important part of our intellect.

These three tests are not satisfied by the Turing test, but they illuminate other important aspects of human intellect. I would be more convinced of a machine's ability to think if it could pass all of them.