We’re entering the age of artificial intelligence. And as AI programs get better and better at acting like humans, we will increasingly be faced with the question of whether there’s really anything special about our own intelligence, or whether we are just machines of a different kind. Could everything we know and do one day be reproduced by a sufficiently complicated computer program installed in a sufficiently complicated robot? In 1950, computer pioneer and wartime codebreaker Alan Turing made one of the most influential attempts to tackle this issue. In a landmark paper, he suggested that the vagueness could be taken out of the question of human and machine intelligence with a simple test. This “Turing Test” assesses the ability of a computer to mimic a human, as judged by another human who cannot see the machine but can ask it written questions.
In the last few years, several pieces of AI software have been described as having passed the Turing Test. This has led some to argue that the test is too easy to be a useful judge of artificial intelligence. But I would argue that the Turing Test hasn’t actually been passed at all. In fact, it won’t be passed in the foreseeable future. But if one day a properly designed Turing Test is passed, it will give us cause to worry about our unique status. The Turing Test is really a test of linguistic fluency. Properly understood, it can reveal the thing that is arguably most distinctive about humans: our different cultures. These give rise to enormous variations in belief and behaviour that aren’t seen among animals or most machines. And the fact that we can program this kind of variation into computers is what gives them the potential to mimic human abilities. In judging fluent mimicry, the Turing Test lets us look for the ability of computers to share in human culture by demonstrating their grasp of language in a social context.
Turing based his test on the “imitation game”, a party game in which a man pretended to be a woman and a judge tried to guess who was who by asking the concealed players questions. In the Turing Test, the judge instead tries to guess which concealed player is a computer and which is a real human. Unsurprisingly, in 1950, Turing did not set out the detailed protocol we would need to judge today’s AI software. For one thing, he suggested the test could be done in just five minutes. But he also did not specify that the judge and the human player must share a culture, and that the computer would have to try to emulate it. That has led to many people claiming that the test has been passed, and to others claiming that the test is too easy or that it should include the emulation of physical abilities.
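To make the shape of that protocol concrete, here is a minimal sketch of how a single round of the test might be run. This is only an illustration under my own assumptions, not anything Turing specified: the function and variable names are invented, and the judge, human and machine stand in for whatever channel would actually carry the written questions and answers.

```python
import random

def run_turing_test_round(judge, human, machine, questions):
    """One round of a simple Turing Test protocol (illustrative sketch only).

    `human` and `machine` are assumed to be callables that take a written
    question and return a written answer; `judge` takes the two anonymised
    transcripts and returns the label ("A" or "B") it believes is the machine.
    """
    # Hide the players behind anonymous labels, in a random order,
    # so the judge cannot tell them apart by position.
    players = [("human", human), ("machine", machine)]
    random.shuffle(players)
    labels = {"A": players[0], "B": players[1]}

    # The judge asks the same written questions of both concealed players.
    transcripts = {"A": [], "B": []}
    for question in questions:
        for label, (_, player) in labels.items():
            transcripts[label].append((question, player(question)))

    # The judge sees only the written transcripts, never the players.
    guess = judge(transcripts)

    # The machine succeeds in this round if the judge's guess is wrong,
    # i.e. the judge mistakes the machine's answers for the human's.
    return labels[guess][0] != "machine"
```

Even this toy version makes the open questions visible: nothing in the code says how long the questioning lasts, what the questions may be about, or whether the judge and the human player share a culture the machine must emulate, which is exactly where the disputes over “passing” the test arise.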