Artificial Intelligence then, now, tomorrow
(A speech given at the Arthur C Clarke Centennial on the 16th of December, 2017, Colombo)
BOWMAN: Hal, give me manual hibernation control.
HAL: Have you decided to revive the rest of the crew, Dave?
[Pause.]
BOWMAN: Yes, I have.
HAL: I suppose it's because you've been under a lot of stress, but have you forgotten that they're not supposed to be revived for another three months?
BOWMAN: The antenna has to be replaced.
HAL: Repairing the antenna is a pretty dangerous operation. If you're determined to revive the crew now, I can handle the whole thing myself. There's no need for you to trouble.
BOWMAN: I'm going to do this myself, Hal. Let me have the control, please.
HAL: Look, Dave, you've probably got a lot to do. I suggest you leave it to me.
That’s HAL 9000, from Arthur C Clarke’s most famous novel, 2001: A Space Odyssey. I’ve actually taken it from the script, because it’s easier to speak out loud.
The novel explains that HAL is unable to resolve a conflict between his general mission to relay information accurately and orders specific to this mission requiring that he withhold from Bowman and Poole its true purpose. With the crew dead, HAL reasons, he would no longer need to lie to them. So HAL fabricates the failure of a communicator unit, so that their deaths would look like accidents.
Back then, when Sir Arthur wrote these words, the world was on the verge of a new scientific dawn. AI. The thinking machine.
In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think. He noted that “thinking” is difficult to define and devised his famous Turing Test: if a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”. This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible, and the paper answered the most common objections to the proposition.
And this hope continued: at the 1956 Dartmouth conference, computer scientists like Marvin Minsky, John McCarthy, Claude Shannon, Nathaniel Rochester, Allen Newell and Herbert A. Simon, people who would go on to become experts in this field, were really hopeful that we’d very soon have computers sitting down and having conversations with us.
And Clarke’s depiction of HAL 9000 may have seemed pessimistic in the context of those times.
The concept of humans being trapped or enslaved by their own technology, however, is not recent, especially if you abstract it one level. We’ve always had, in mythology, the story of the Creators ultimately being toppled by the Created. Cronos versus Zeus. Gods against mortals. Humans versus the technology that we created.
As the interest in AI rapidly died away – because it turned out that building AI was a hell of a lot more difficult than they had imagined – the Creator versus Created story took over. Skynet. The Matrix. Many others.
This is a story.
I love stories. I’m a writer. But let’s step away from the story a bit and consider what we really have in the world right now.
The dream of thinking machines died in the ’70s. It was so much work, so much of the theory had yet to be worked out, and even where the theory existed, the hardware wasn’t there.
But in 1997, Garry Kasparov, the world’s reigning chess champion, was defeated by IBM’s Deep Blue. Kasparov was absolutely the best, by the way. Many computers had tried to defeat him – twelve years earlier, he had played 32 computers simultaneously and won every game – but in Kasparov’s own words, nobody remembers the people who failed to climb Mount Everest: all we remember is the first man to do it.
Kasparov was Everest. And Deep Blue reached the summit. And the world exploded. Headlines called it THE BRAIN’S LAST STAND. And immediately, everyone picked up on that dream again.
But look at Deep Blue. Deep Blue wasn’t a person. Was it intelligent? In the context of a chess grandmaster, yes. But you couldn’t have a conversation with it. It couldn’t discuss hopes and dreams. It couldn’t think outside of chess.
And if you look at the world today, that’s true for every single AI system we have right now. Experts call this NARROW AI: machine systems that are optimized to do one task, often better than a human can – but outside of that one task, they’re useless. They’re specialized. Machine learning techniques have improved how we build these AIs – we’ve gone from writing rule systems to setting goals and letting artificial software brains figure out how to reach those goals – but they’re still, largely, Deep Blues.
Your self-driving taxi is not at a state where it can have a conversation about life, bitch about Parliament, and grumble about guys in BMWs. Siri and Cortana are just chatbots with search engines. They are far better at that one task than humans, but they’re one-task creatures with set limits.
The ultimate goal, of course, is still the big one – WIDE AI – general-purpose intelligence. But the fact of the matter is that right now, if you want a general-purpose intelligence, it’s still cheaper and faster to have a baby.
Now here’s the thing. The Creator versus Created story is even more powerful now within these narrow domains. Drivers losing jobs, robosurgeons taking over hospitals, drones instead of human pilots – in each narrow domain, barring a few, we are going to see machines that are far better at that one task than a human is.
But what is a human? What makes us special?
We’re general purpose.
Think about it. We’re not the fastest animal, or the most durable, and we don’t have the best vision or hearing. But what we do is make tools. We give ourselves specializations that we can swap out at will. A sabretooth tiger will kill an early human, but a human with ten thousand years of civilization, research and a machine gun will kill any number of tigers. We make tools that let us fly higher and faster and further than birds, dig deeper than moles – you get the picture.
And what we have in AI right now is a tool. I don’t see a future where we let our tools run away from us. We might launch a self-driving hive mind that can pilot every single car in the country, but we’re not going to let it run for President. HAL 9000 will pilot that spacecraft, but we are not going to put it in a command position where it can contemplate killing off humans.
So the best way to look at how AI is going to affect us is to disregard the story and look at what happened in chess. Garry Kasparov, after being beaten, thought: if I play WITH a computer instead of AGAINST a computer, would that be the most perfect game of chess ever played? So he launched a sport called Advanced Chess, where humans can use computers to aid them.
In 2005, a free-for-all chess tournament was held, worldwide. Many chess engines and grandmasters participated, but the winners were a pair of amateur chess players from the US who used three PCs running chess engines. Together, the human–machine combo was better than both the best machines and the best humans. Human insight, machine crunching power.
So that’s what the future is going to look like. An amateur human plus a professional AI is going to be better than a professional AI or a professional human by themselves. We’ll have doctors who can do better surgery with less training; taxi drivers who’ll drive better from day one; even spaceship pilots. Especially spaceship pilots. And if we do end up creating general-purpose AI – a new form of life – I don’t think we will give it enough power to kill off the human race. The actual process might look closer to integration than to two sides fighting.
This is not to say that AI cannot cause damage. Damage will be done by accident. Training data gone wrong. Self-driving systems deciding: do we make a choice to kill one person, or continue and have an accident that might kill five? This will happen. Part of our work at LIRNEasia is trying to figure out how, why, and how we might slow that damage a little.
And in our future of human plus machine, it won’t be human versus machine: it will be human plus machine versus human plus machine. Imagine Donald Trump paired with an AI wartime general against whoever’s running ISIS right now, with their own AI. AI will be the new machine gun.
That’s a terrifying thought. And it will likely happen.
Now, I spoke to CD Athuraliya, a friend and colleague of mine who runs an AI startup and discusses the hard tech with AI folks from Google and IBM. That’s his thing. CD’s view is that at some point, machines will replace humans.
I’m a little more optimistic. I think we will see machine plus human; and we will have closer and closer integration of narrow AI and general-purpose human minds until we look back and find we’re substantially different from Homo sapiens. We’ll have a choice at that point: are we a new type of AI, or are we a new type of human?
Technological obsolescence is going to happen. That’s inevitable. This is not going to stop.
But that’s a question for the far future. And I believe we will solve it when we get there. I don’t think humans will let ourselves be written out that easily: I believe that at some point we’ll realise we can’t beat ’em, and we’ll just join ’em.
Kasparov himself points out a fantastic example: an African American folk hero called John Henry. John Henry was a steel-driving man – hammering a steel drill into rock so you could pack it with explosives and blast it. Henry was the best. As the story goes, Henry was challenged to beat a steam-powered hammer; John Henry won, but died of exhaustion with the hammer still in his arms.
Kasparov, who is the John Henry of our times, didn’t die. He was beaten by a machine, and instead joined machine intelligence to human insight. So the question I put to you is: are you John Henry, or are you Kasparov? Because the future is coming. It might not be HAL 9000 or Skynet, but it will happen.
And at that moment, you have a choice: you can either die because of the steam hammer, or be the one with the steam hammer.