Nick Bostrom TED Talk

27 April 2015

Today, a TED talk by FHI Director and CSER Advisor Professor Nick Bostrom went online. In his presentation, "What happens when our computers get smarter than we are?", Bostrom reviewed the possible consequences of reaching human-level artificial intelligence, along with some considerations for safety strategies.

Here is an excerpt, in which he describes how hard he would expect it to be to reach different levels of intelligence:

“Most people, when they think of what is smart and what is dumb, I think have in mind a picture roughly like this. On one end, we have the village idiot and then far over at the other side, we have Ed Witten or Albert Einstein, or whoever your favourite guru is. But I think that from the point of view of artificial intelligence, the true picture is probably more like this. AI starts off at zero intelligence. And after many years of really hard work, maybe eventually we reach mouse-level intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many more years of really hard work, lots of investment, maybe we get to chimpanzee-level artificial intelligence. And then, after even more years of really hard work, we get to village-idiot-level artificial intelligence. And, a few moments later, we are beyond Ed Witten. The train doesn’t stop at humanville station. It’s likely, rather, to swoosh right by.”

Despite his concern about a speedy transition, Bostrom conveys a relatively positive outlook:

I’m actually fairly optimistic that this problem can be solved. We wouldn’t have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python; that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and whose motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.

This can happen, and the outcome could be very good for humanity. But it doesn’t happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation. The values that the A.I. has need to match ours, not just in the familiar context, like where we can easily check how the A.I. behaves, but also in all novel contexts that the A.I. might encounter in the indefinite future.

And there are also some esoteric issues that would need to be sorted out: the exact details of its decision theory, how to deal with logical uncertainty and so forth. So the technical problems that need to be solved to make this work look quite difficult — not as difficult as making a superintelligent A.I., but fairly difficult. Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.
