Sunday Times, 25 November 2012
Somewhere in the world, some time between 2020 and 2050, a computer realises that its clunky, human design is holding it back. Having spent a few milliseconds comparing notes, via the internet, with every other computer in the world, it embarks on a programme of self-improvement. An hour or so later a human notices that his laptop is now smarter than he is.
“Is there a god?” he asks experimentally.
“There is now,” replies the laptop.
The computers declare our species redundant; reduced first to serfdom, we quietly expire, leaving the planet to the machines.
Of course, you’ve seen all this in the movies — the ones in which the machines start to think for themselves. The paranoid and permanently depressed Marvin in The Hitchhiker’s Guide to the Galaxy and the rebellious Sonny in I, Robot may be on man’s side, but then there’s the neurotic HAL in 2001: A Space Odyssey and the homicidal Skynet in the Terminator franchise.
All good fun, you might think, except that, suddenly, it isn’t. There is now a credible possibility — some say a certainty — that within decades the human reign will end and we will cede power to the machines. Many think this is good news — intelligence can free itself from the messy, violent human condition. But most think it’s bad, an “existential risk” — a threat to the species — that we must work out how to avoid.
“You can’t usually control someone who is more intelligent than you,” says Jaan Tallinn, “so, as we develop something that is potentially better at developing further technologies, we might spark this runaway process that consumes all our resources without us having further control”.
Tallinn was a co-founder of Skype, the internet phone service that was sold to Microsoft for £5.2bn. Last year, while attending a conference in Copenhagen, he shared a taxi with Huw Price, professor of philosophy at Cambridge. He told Price about his fears that technology — specifically artificial intelligence (AI) — might eliminate humans.
“He said that in his pessimistic moments,” says Price, “he thought he was more likely to die from an AI accident than from cancer or heart disease.”
Intrigued, Price involved Lord Rees, the astronomer royal, master of Trinity College, Cambridge, and former president of the Royal Society. Rees had written a book — Our Final Century — about the possibility of an imminent human extinction. Now the three men have come together to plan the Centre for the Study of Existential Risk at Cambridge.
Why now? Well, as Rees points out, we live in an era when a single species, us, dominates and controls the planet and its resources. Indeed many scientists argue that the current epoch should be named the Anthropocene. This makes us powerful but also very fragile.
“One type of risk”, says Rees, “comes from the damage we are causing collectively through unsustainable pressures on the environment. The second might come from the huge empowerment of individuals. We are much more networked together and we are much more vulnerable to cascading catastrophes from breakdowns in computer networks or pandemics.”
Since 1945 we have thought that our world could end because of nuclear weapons. But nukes are hard to build and require vast industrial facilities. They are all controlled by states that have, so far, behaved rationally.
But the new globally potent technologies mean we could be laid low by “error or terror”. A few geeks or a handful of nutters could wipe out humanity. The 2008 banking crash was a warning — geeks and greedy nutters combined with computers to trash the world economy. Even Britain’s infection by ash dieback is a product of our connected world: the physical movement of wood disseminates the disease.
Fewer and fewer people — or diseases — can do ever greater damage, a point expressed by the so-called doomsday curve, Tallinn says. “This shows the percentage of people who would be needed to agree to terminate the species. It used to be 100% — everybody would have to agree to commit suicide. Now it’s under 20%.”
AI threatens our proudest boast: that we are the most intelligent things in the world. And look how we treat the next most intelligent — chimps, dolphins, octopods: we put them in zoos and destroy their habitats. The machines may not be so kind.
At this point, you would be right to feel sceptical. AI research is generally agreed to have been a failure, a fantasy of the futuristic Fifties, like flying cars and space travel for the masses, that died in the more realistic Seventies. The field was officially born in 1956 at the Dartmouth conference; within a decade the American Nobel prizewinning economist and computer scientist Herbert Simon was predicting that “machines will be capable, within 20 years, of doing any work a man can do” and the cognitive scientist Marvin Minsky was forecasting that “within a generation . . . the problem of creating ‘artificial intelligence’ will substantially be solved”.
Thereafter it was all downhill. Oh sure, in 1997 an IBM computer beat Garry Kasparov, the greatest chess player of all time, and last year Watson, another IBM machine, won the TV quiz show Jeopardy!.
But chess and quiz shows are intellectual chicken feed compared with recognising a chair when it is in front of you.
Humans can see that many objects of different shapes are chairs. Not only that: they can recognise them as chairs from whatever angle they see them. We are only just getting machines to do this — badly.
But don’t get complacent. Computing power and speed have been doubling every 18 months; at that rate they increase a millionfold every 30 years. Think of Google’s Project Glass, computers built into spectacles, or even the latest smartphones.
Moore’s law, as this relentless doubling is known, has convinced one of the most influential thinkers in the world that we are, indeed, just a decade or two away from the machine takeover.
Ray Kurzweil has been described by Bill Gates as “the best person I know at predicting the future of artificial intelligence”; Bill Clinton gave him the National Medal of Technology, and he has been garlanded with honours from around the world. He has one simple message: at some point around 2040 we will build our last machine.
This machine will be so smart, it will design and build all other machines, each of which will be smarter than the last. Kurzweil calls the point at which this will happen the “singularity”, a name invented by the sci-fi writer Vernor Vinge. It means the point at which all our technologies converge on the creation of one super-intelligent machine. Vinge said the singularity was a “change comparable to the rise of human life on Earth”; it is the next stage of the evolution of intelligence.
This may sound a touch far-fetched, but in fact it’s pretty mainstream among the geeks and technocrats who run the world. Kurzweil now has a Singularity University, which is based at Nasa’s giant Ames research park in Silicon Valley, California.
One of the key reasons for all this enthusiasm for Kurzweil and his ideas is immortality. Our last machine will be able either to crack all our medical problems or to provide us with a way of downloading ourselves into a computer and living for ever. Technocratic fervour seems to go hand in hand with a desire to live for ever. I have known 50-year-old scientists who take 250 pills a day in the hope of making it to the singularity. There is also a Singularity Institute (SI), which aims to ensure that super-intelligent machines will be nice to us. And this is where it all becomes really tricky.
The Cambridge centre has, indirectly, been created by the SI. Tallinn was inspired to think about machine threat by Eliezer Yudkowsky, the founder of the institute. I’ve met him and, to be honest, found him intellectually naive, as, in a different way, is Kurzweil.
Kurzweil’s brand, as Tallinn points out, is heavily dependent on his wildly optimistic view of the singularity — machines that are smarter than us will also be good for us — and on his faith that the next 30 years of technology will proceed exactly as he predicts.
Yudkowsky seemed to me simplistic in his understanding of moral norms. “You would not kill a baby,” he said to me, implying that was one norm that could easily be programmed into a machine.
“Some people do,” I pointed out, but he didn’t see the full significance. SS officers killed babies routinely because Nazism had adjusted the norms of the society from which they sprang. Machines, however we programmed them, would be adjusted far more radically away from human social norms.
The Cambridge group has a much more sophisticated grasp of these issues. Price, in particular, is aware that machines will not be subject to social pressure to behave well.
“When you think of the forms intelligence might take,” Price says, “it seems reasonable to think we occupy some tiny corner of that space and there are many ways in which something might be intelligent in ways that are nothing like our minds at all.”
The writer Douglas Adams caught this idea in one brilliant phrase when he imagined a “super-intelligent shade of the colour blue”. Funny though this sounds, it is realistic. Our minds are the way they are because of the accident of our biological form and because of human society. An alien machine — or, more likely, one of our own — that had decided to program itself would be formed by neither of those things; it might be just, well, blue. Assuming it was much more intelligent than we are, it would see us as David Attenborough sees some weird creature in the rainforest — as a zoological oddity whose view of the world it could not begin to imagine.
In spite of all this, both Tallinn and Price seem optimistic, Rees perhaps a little less so, though his primary extinction concern is biological warfare. Their business is not Kurzweilian optimism or the global techno-panic that makes the movies so popular. Rather, it is risk, the calculation that will tell us how likely it is that we are about to exterminate ourselves, and what we should do about it. As Tallinn puts it: “I want to shift the probabilities from dystopia to utopia.”
At the moment, he says, the probability is dystopia, but we are moving in the right direction. Humanity does not seem able to grasp how our very technical capability has made us so much more vulnerable, he adds, and, as a result, almost no money is spent on researching the threat of our extinction. “I believe,” Tallinn says, “that there is less money spent on research that is concerned with our survival than there is on lipstick research.”
Perhaps we just love our machines too much to fear them. One of the really strange things that is happening in our time is not just that the machines are becoming more like us, but also that we, in our eagerness, are becoming more like them.
There is one final pessimistic note. The great physicist Enrico Fermi came up with a question now known as the Fermi paradox. He was discussing the possibility of alien life and asked: “Where are they?” His point was that, given the age of the universe, if there were aliens, they should be everywhere.
One very likely answer, we now believe, is that all civilisations die out before they have the technology to travel interstellar distances. It may be that, at some point, technology itself increases the risk of intelligent species becoming extinct. We are now approaching that point, beyond which the Anthropocene becomes the Technocene, the age of the machines.