This was the background to a formal discussion on ‘The Mind and the Computing Machine’[33] held in the philosophy department at Manchester on 27 October 1949. Just about everyone in British academic life with a view to express had been assembled. It began with Max Newman and Polanyi arguing about the significance of Gödel’s theorem, and ended with Alan discussing brain cells with J.Z. Young, the physiologist of the nervous system. In between, the discussion raged through every other current argument, the philosopher Dorothy Emmet chairing. ‘The vital difference,’ she said during a lull, ‘seems to be that a machine is not conscious.’
But such a use of words would satisfy Alan no more than would Polanyi’s assertion that the function of the mind was ‘unspecifiable’ by any formal system. He wrote up his own view, which appeared as a paper, Computing Machinery and Intelligence,[34] in the philosophical journal Mind in October 1950. It was typical of him that the style he employed in this august journal was very little different from that of his conversation with friends. Thus he introduced the idea of an operational definition of ‘thinking’ or ‘intelligence’ or ‘consciousness’ by means of a sexual guessing game.
He imagined a game in which an interrogator would have to decide, on the basis of written replies alone, which of two people in another room was a man and which a woman. The man was to deceive the interrogator, and the woman to convince the interrogator, so they would alike be making claims such as ‘I am the woman, don’t listen to him!’ Although pleasantly recalling the secret messages that might be passed in his conversations with Robin and Nick Furbank, this was in fact a red herring, and one of the few passages of the paper that was not expressed with perfect lucidity. The whole point of this game was that a successful imitation of a woman’s responses by a man would not prove anything. Gender depended on facts which were not reducible to sequences of symbols. In contrast, he wished to argue that such an imitation principle did apply to ‘thinking’ or ‘intelligence’. If a computer, on the basis of its written replies to questions, could not be distinguished from a human respondent, then ‘fair play’ would oblige one to say that it must be ‘thinking’.
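Read simply as a protocol, the game has a small, definite structure: written questions go in, written replies come out, and the interrogator must name the machine. The sketch below is only an illustration of that structure; the names (Witness, Interrogator, play) and the sample questions are invented here, not drawn from Turing’s paper, and the ‘decision rule’ is deliberately a coin toss, since chance-level identification is exactly what a successful machine would force a real interrogator down to.

```python
# Illustrative sketch of the imitation game's structure (names and questions
# are assumptions for this example, not part of Turing's paper).

import random

class Witness:
    """Answers written questions with written replies; may be human or machine."""
    def __init__(self, label, answer):
        self.label = label        # 'human' or 'machine' (hidden from the interrogator)
        self.answer = answer      # function: question string -> reply string

class Interrogator:
    """Sees only the written replies and must say which witness is the machine."""
    def __init__(self, questions):
        self.questions = questions

    def identify(self, replies):
        # Placeholder decision rule: a pure guess, i.e. 50 per cent accuracy.
        return random.choice(['A', 'B'])

def play(interrogator, witness_a, witness_b):
    """Run one session and report whether the machine was correctly identified."""
    replies = [(q, witness_a.answer(q), witness_b.answer(q))
               for q in interrogator.questions]
    guess = interrogator.identify(replies)
    actual = 'A' if witness_a.label == 'machine' else 'B'
    return guess == actual

human = Witness('human', lambda q: "I really am the human; don't listen to the other one!")
machine = Witness('machine', lambda q: "I really am the human; don't listen to the other one!")
print(play(Interrogator(["What is a sonnet?", "Add 34957 to 70764."]), machine, human))
```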
This being a philosophical paper, he produced an argument in favour of adopting the imitation principle as a criterion. This was that there was no way of telling that other people were ‘thinking’ or ‘conscious’ except by a process of comparison with oneself, and he saw no reason to treat computers any differently.*
The Mind article largely took over what he had said in his NPL report, which had not, of course, been published. There were, however, some new developments, not all very serious. One was the joke of a proud atheist who refused to be the Responsible Scientist expected by Downside Abbey. He gave a tongue-in-cheek demolition of what he called the ‘Theological Objection’ to the idea of machines thinking, which concluded that thinking might indeed be the prerogative of an immortal soul, but then there was nothing to stop God from bestowing one upon a machine. More ambiguous in tone was a reply to an objection ‘from Extra-Sensory Perception’. He wrote that
These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming. It is very difficult to rearrange one’s ideas so as to fit these new facts in. Once one has accepted them it does not seem a very big step to believe in ghosts and bogies. The idea that our bodies move simply according to the known laws of physics, together with some others not yet discovered but somewhat similar, would be the first to go.
Readers might well have wondered whether he really believed the evidence to be ‘overwhelming’, or whether this was a rather arch joke. In fact he was certainly impressed at the time by J.B. Rhine’s claims to have experimental proof of extra-sensory perception. It might have reflected his interest in dreams and prophecies and coincidences, but certainly was a case where for him, open-mindedness had to come before anything else; what was so had to come before what it was convenient to think. On the other hand, he could not make light, as less well-informed people could, of the inconsistency of these ideas with the principles of causality embodied in the existing ‘laws of physics’, and so well attested by experiment.
The idea of ‘teaching’ the machine had also progressed since 1948. By now he had probably learnt by trial and error that the pain and pleasure method was appallingly slow, and had worked out a reason why, which cast a look back to Hazelhurst:
The use of punishments and rewards can at best be a part of the teaching process. Roughly speaking, if the teacher has no other means of communicating to the pupil, the amount of information which can reach him does not exceed the total number of rewards and punishments applied. By the time a child has learnt to repeat ‘Casabianca’ he would probably feel very sore indeed, if the text could only be discovered by a ‘Twenty Questions’ technique, every ‘NO’ taking the form of a blow. It is necessary therefore to have some other ‘unemotional’ channels of communication. If these are available it is possible to teach a machine by punishments and rewards to obey orders given in some language, e.g. a symbolic language. These orders are to be transmitted through the ‘unemotional’ channels. The use of this language will diminish greatly the number of punishments and rewards required.
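The counting argument in that passage can be made concrete with some invented numbers. The sketch below assumes a poem of about 1,200 characters over a 27-symbol alphabet (figures chosen purely for illustration, not taken from Turing or Hodges); since each reward or punishment answers at most one yes/no question, it carries at most one bit, so the pupil must endure at least as many signals as the text contains bits.

```python
# Back-of-envelope version of the counting argument, with assumed numbers:
# each reward or punishment carries at most one bit of information, so a
# text can only be 'taught' this way one binary question at a time.

import math

alphabet_size = 27      # assumption: 26 letters plus a space
poem_length = 1200      # assumption: rough character count of 'Casabianca'

bits_per_character = math.log2(alphabet_size)       # about 4.75 bits
bits_in_poem = poem_length * bits_per_character

print(f"Information in the text: about {bits_in_poem:.0f} bits")
print(f"Minimum rewards/punishments needed: {math.ceil(bits_in_poem)}")
```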
It was a nice touch of self-reference to bring in Casabianca, for the boy on the burning deck, executing his orders mindlessly, was like the computer. He went on to suggest that a learning machine might achieve a ‘supercritical’ state when, in analogy with the atomic pile, it would produce more ideas than those with which it had been fed. This was essentially a picture of his own development, stated rather more seriously than in 1948, and a claim that even his own originality must somehow have been determined. Perhaps he was thinking of his series for the inverse tangent function, and the law of motion in general relativity, when he first began to put things together in his mind. This, again, was not a new idea. Bernard Shaw had argued it thus in Back to Methuselah, when Pygmalion produced his automaton:
ECRASIA: Cannot he do anything original?
PYGMALION: No. But then, you know, I do not admit that any of us can do anything really original, though Martellus thinks we can.
ACIS: Can he answer a question?
PYGMALION: Oh yes. A question is a stimulus, you know. Ask him one.
Much of what Alan wrote was a justification of Pygmalion’s argument, which Shaw, champion of the Life Force, had derided.
This time he also offered a very carefully phrased prophecy, made deliberately rather than off the cuff to newspaper reporters.
I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
These conditions (‘average’, ‘five minutes’, ‘70 per cent’) were not very demanding. But it was most important that the ‘imitation game’ would allow questions about anything whatever, not just about mathematics or chess.
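For scale, the storage figure is usually read as 10⁹ binary digits, the unit Turing’s paper uses elsewhere when estimating capacities; on that assumption it amounts to roughly 125 megabytes, as the quick conversion below shows, purely for orientation.

```python
# Rough scale of the prophecy's storage figure, on the assumption that
# 10^9 means binary digits (bits), the unit used in Turing's own estimates.

bits = 10**9
megabytes = bits / 8 / 1_000_000
print(f"{bits:,} bits is about {megabytes:.0f} MB of storage")
```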
It reflected his all-or-nothing intellectual daring, and it came at an appropriate moment. A first generation of pioneers in the new sciences of information and communication, people like von Neumann, Wiener, Shannon, and pre-eminently Alan Turing himself, who had combined broad insights into science and philosophy with the experience of the Second World War, was giving way to a second generation which possessed the administrative and technical skills to build the actual machines. The broad insight, and the short-term skill, had little in common – that was one of Alan’s problems. This paper was something of a swan song for the primal urge, bequeathing the original excitement to the world before it was submerged in mundane technicalities. As such it was a classic work in the British philosophical tradition. It was a gentle reproof to the ponderous essays by Norbert Wiener, as well as to the reactionary, ‘soupy’ trend of English culture in the late 1940s. Bertrand Russell admired it, and his friend Rupert Crawshay-Williams wrote appreciatively to Alan of how much Russell and he had enjoyed reading it.[35]
From a philosophical point of view, it could be said to fit in with Gilbert Ryle’s The Concept of Mind, which had appeared in 1949, and which put forward the idea of mind not as something added to the brain, but as a kind of description of the world. But Alan’s paper proposed a specific kind of description, namely that of the discrete state machine. And he was more the scientist than the philosopher. The point of his approach, as he stressed in the paper, was not to talk about it in the abstract, but to try it out and see how much could be achieved. In this he was the Galileo of a new science. Galileo made a practical start upon that abstract model of the world called physics; Alan Turing upon that model provided by the logical machine.
Alan himself would have liked the comparison: he made reference in the article to Galileo incurring the displeasure of the church, and the format of his ‘Objections’ and ‘Refutations’ was one of a trial. A year or so later he gave a talk[36] on this subject subtitled ‘A Heretical Theory’. He liked to say things like: ‘One day ladies will take their computers for walks in the park and tell each other “My little computer said such a funny thing this morning!”,’ to destroy any sort of sanctimonious forelock-touching to the ‘higher realms’. Or, when asked how to make a computer say something surprising, he answered ‘Get a bishop to talk to it.’ In 1950 he was hardly likely to be on trial for heresy. But he certainly felt himself up against an irrational, superstitious barrier, and his predisposition was to defy it. He continued:
I believe further that no useful purpose is served by concealing these beliefs. The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research.
Science, to Alan Turing, was thinking for himself.
Untarnished by all the trials and errors surrounding the actual computer installations, this ‘conjecture’ sprang out: the achievement by the millennium of something approaching the artificial intelligence that had long been expressed in the myth of Pygmalion. Also emerging fully-formed was the fruit of his thought since 1935 on the discrete state machine model, on universality, and on the constructive use of the imitation principle to ‘build a brain’.
Nonetheless, beneath the assertive surface of the paper lay probing, needling, teasing questions. For this was not tunnel vision. Unlike so many scientists, Alan Turing was not trapped within the narrow framework within which his ideas were formed. Polanyi was keen on pointing out the different models employed by the different branches of scientific enquiry, and the importance of distinguishing them. But Edward Carpenter had gone to the heart of the matter long before:[37]
The method of Science is the method of all mundane knowledge; it is that of limitation or actual ignorance. Placed in face of the great uncontained unity of Nature we can only deal with it in thought by selecting certain details and isolating those (either wilfully or unconsciously) from the rest.
To model the activity of the brain as a ‘discrete controlling machine’ was a good example of ‘selecting certain details’, since the brain could, if desired, be described in many other ways. Alan’s thesis was, however, that this was the model relevant to what was called ‘thinking’. As he said a little later,[38] in a parody of Jefferson’s argument, ‘We are not interested in the fact that the brain has the consistency of cold porridge. We don’t want to say “This machine’s quite hard, so it isn’t a brain, and so it can’t think”.’ Or as he wrote in this paper,
We do not wish to penalise the machine for its inability to shine in beauty competitions, nor to penalise a man for losing in a race against an aeroplane. The conditions of our game make these disabilities irrelevant. The ‘witnesses’ can brag, if they consider it advisable, as much as they please about their charms, strength, or heroism, but the interrogator cannot demand practical demonstrations.
There could be arguments about his thesis within this model, or there could be arguments about the model. The discussion of Gödel’s theorem was, par excellence, one which accepted the model of a logical system. But alive to the philosophy of science, Alan discussed the validity of the model itself. In particular, there was the fact that no physical machine could really be ‘discrete’:
Strictly speaking there are no such machines. Everything really moves continuously. But there are many kinds of machine which can profitably be thought of as being discrete-state machines. For instance in considering the switches for a lighting system it is a convenient fiction that each switch must be definitely on or definitely off. There must be intermediate positions, but for most purposes we can forget about them.
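The ‘convenient fiction’ in that passage is easy to make explicit. Below is a small sketch, in modern terms, of a lighting switch modelled as a discrete-state machine; the state names and transition table are chosen here for illustration and are not Turing’s, the point being only that the continuum of intermediate lever positions is deliberately left out of the model.

```python
# The quoted 'convenient fiction' made explicit: a light switch modelled as a
# discrete-state machine with exactly two states, ignoring the continuum of
# intermediate positions the physical lever actually passes through.

from typing import Dict, Tuple

State = str   # 'on' or 'off'
Input = str   # 'toggle' or 'leave'

# Transition table of the discrete model: (current state, input) -> next state
TRANSITIONS: Dict[Tuple[State, Input], State] = {
    ('off', 'toggle'): 'on',
    ('on',  'toggle'): 'off',
    ('off', 'leave'):  'off',
    ('on',  'leave'):  'on',
}

def step(state: State, signal: Input) -> State:
    """Advance the machine one discrete step; intermediate positions do not exist here."""
    return TRANSITIONS[(state, signal)]

state = 'off'
for signal in ['toggle', 'leave', 'toggle']:
    state = step(state, signal)
    print(f"{signal} -> {state}")
```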