An informatics expert says humanity has nothing to fear from smart machines; the real threat comes from incompetent machines that get things wrong.
In an article for Communications of the ACM, Alan Bundy, a professor of automated reasoning at the School of Informatics, University of Edinburgh, Scotland, said concerns had been widely expressed that artificial intelligence poses a threat to humanity.
"The fear is that these super-intelligent machines will pose an existential threat to humanity, for example, keep humans as pets or kill us all – or maybe humanity will just be a victim of evolution," Bundy wrote.
But, he said, this was based on a false and over-simplified understanding of intelligence.
Bundy pointed out that the expertise exhibited by AI systems tended to be very high in narrow areas. He cited the examples of Deep Blue (the chess-playing computer), Tartan Racing (a self-driving car), Watson (IBM's question-answering system) and AlphaGo (a Go-playing program).
In each case, the systems were narrowly focused to a single task.
"I am not attempting to argue that AI general intelligence is, in principle, impossible. I do not believe there is anything in human cognition that is beyond scientific understanding," Bundy said.
"With such an understanding will surely come the ability to emulate it artificially. But I am not holding my breath. I have lived through too many AI hype cycles to expect the latest one to deliver something that previous cycles have failed to deliver.
"And I do not believe that now is the time to worry about a threat to humanity from smart machines, when there is a much more pressing problem to worry about."
Bundy said the problem was that humans tended to ascribe too much intelligence to AI systems that were narrowly focused.
"Any machine that can beat all humans at Go must surely be very intelligent, so by analogy with other world-class Go players, it must be pretty smart in other ways too, mustn't it? No!" he said.
He said the malfunctioning of an individual robot or automated system might pose a danger to humans, but not to the whole of humanity. However, he cited the Reagan-era Star Wars initiative (the Strategic Defense Initiative) as a case in which an error could have sent the whole world up in flames.
"I was among many computer scientists who successfully argued that the most likely outcome was a false positive that would trigger the nuclear war it was designed to prevent."
Bundy said as AI progresses, "we will see even more applications that are super-intelligent in a narrow area and incredibly dumb everywhere else".
"The areas of successful application will get gradually wider and the areas of dumbness narrower, but not disappear. I believe this will remain true even when we do have a deep understanding of human cognition."