Centre to study technology risks to humans

26 November 2012
by Rachael Fergusson

Researchers at the University of Cambridge have proposed a new centre to study the risks that technology poses to humans.

The researchers - who include a philosopher, a scientist and a software engineer - have come together to propose the new centre at Cambridge. The Centre for the Study of Existential Risk (CSER) would address developments in human technologies that might pose “extinction-level” risks to the human species, ranging from biotechnology and nanotechnology to extreme climate change and even artificial intelligence.

“At some point, this century or next, we may well be facing one of the major shifts in human history – perhaps even cosmic history – when intelligence escapes the constraints of biology,” said Huw Price, the Bertrand Russell Professor of Philosophy and one of CSER’s three founders.

Price was speaking about the possible impact of Irving John ‘Jack’ Good’s ultra-intelligent machine, or artificial general intelligence (AGI) as it is called today. In 1965, Good published a paper in Advances in Computers called ‘Speculations Concerning the First Ultraintelligent Machine’. In it, he wrote that the ultra-intelligent machine would be the “last invention” that mankind would ever make, leading to an “intelligence explosion” - an exponential increase in self-generating machine intelligence.

For Good, who went on to advise Stanley Kubrick on ‘2001: A Space Odyssey’, the “survival of man” depended on the construction of this ultra-intelligent machine.
