CSER features on The Science Network

11 November 2014

The Science Network has uploaded a discussion on existential risk. Alongside CSER co-founder Martin Rees, it features professor of supercomputing Larry Smarr and neuroscientists Roger Bingham and Terrence Sejnowski, who explore some of the philosophy behind CSER. It’s a good example of how eminent scientists from diverse fields are now coming together to discuss the risks associated with accelerating technological change. Some of their comments are transcribed below.

Terrence Sejnowski: What we’re driving at is big data: the fact that we can now use computers powerful enough to sift through all of the data that the world and the internet have produced, and to extract regularities and knowledge from it. In biological systems, that’s called learning. We’re taking in a huge amount of data; every second is like a fire-hose, and we sift through it very efficiently, pick out the little bits of data, and incorporate them into the very architecture, the very hardware, of our brain. How that is done is something neuroscientists today are just starting to get at: the mechanisms underlying memory, where in the brain it happens and how it’s done biochemically. But once we understand that, once we know how the brain does it, it will be a matter of just a few years before that’s incorporated into the latest computer algorithms. Already, something called deep learning has taken over at Google, for example, for search, speech recognition, and object recognition in images. And I’m told that Google Maps is now using it to help you navigate, in terms of where you are and what street you’re on. It’s already happening, and Larry’s right about that.

It reminds me of a cartoon I once saw, a Disney movie about the sorcerer’s apprentice, who was tasked with cleaning the sorcerer’s workshop. He managed to get hold of a magic wand and enchanted a broom to help him. Unfortunately, he didn’t know how to turn it off, and the brooms kept doubling until the room was flooded. The problem is that we’re like the sorcerer’s apprentice, in the sense that we’re creating the technology, but at some point we will lose track of how to control it, or it could fall into dangerous hands. But I think the real immediate threat is simply not knowing what the consequences are: unintended consequences that are already happening and that we’re only beginning to glimpse, for example the privacy issue of who has access to your data. That still hasn’t been settled. The problem is that it takes time to explore and settle these issues and make policies, and it’s happening too quickly for us to be able to do that.

Larry Smarr: I think that’s the real issue: the timescale. Everybody’s concentrating right now on the Ebola virus and its spread, and you think, we’ll come up with some way to stop it; maybe it won’t be easily transmitted. We became aware of [HIV] in 1980. By 1990, we were spending a billion dollars a year, and many of the best scientists in the world had started working on the problem. That’s 30 years ago, and the number of people with HIV is only now starting to peak, 30 years later. Well, do we have 30 years, with a billion dollars a year and all of the best brains on the planet, to work on these problems? Do we have 30 years to get on top of some of these things? That is what I’m concerned about: that by the time we see some of these technologies beginning to be used in ways that are, let’s just say, antisocial, we may not have time, whether because of the inability of our government to come to any kind of decision about anything, or because of regulatory timescales. What if the best minds on the planet, with the best will from society to find a solution to the problem, don’t have time enough to do it?

Martin Rees drew attention to how taking a longer perspective can affect one’s perception of these risks.

Martin Rees: As astronomers, we do bring one special perspective: we’re aware of the far future. We’re aware not merely that it has taken four billion years for us to evolve, but that there are four billion years ahead of us on Earth. Human life is not the culmination; more evolution will happen, on Earth and far beyond. So we should be concerned not just with the effects and downsides that these technologies will have on our children and grandchildren, as it were, but with their longer-term effects. And of course there’s a big ethical issue about the extent to which you should weigh the welfare of people not yet born. Some people say you should take that into account, and if so, that’s an extra motive for worrying about long-term things like climate change, which will have its worst effects more than a century from now.

If you’re a standard economist and you discount the future in the standard way, then anything more than 50 years into the future is discounted to zero, and I think we ought to ask: is that the right way to think about future generations? You could instead adopt a different principle, that we should not discriminate on the grounds of date of birth: we should value the welfare of a newborn baby as much as that of someone who is middle-aged. And if we take that line, instead of straight economic criteria, we would surely be motivated to do more now to minimise these potential long-term threats.
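To see why standard discounting has the effect Rees describes, consider exponential discounting, under which welfare t years in the future is weighted by a factor of (1+r)^{-t}. The 5% rate below is purely an illustrative assumption on our part, not a figure from the discussion, but it shows how quickly the weight collapses:

\[
w(t) = (1+r)^{-t}, \qquad w(50) = 1.05^{-50} \approx 0.087, \qquad w(100) = 1.05^{-100} \approx 0.0076
\]

On these assumptions, harms a century away carry less than 1% of the weight of harms today, which is why climate damages beyond 2100 effectively vanish from a standard cost-benefit analysis.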

The full discussion is available here.
