Technological Risks in the World Economic Forum's 2015 Global Risks Report

06 February 2015

This year, the World Economic Forum has featured risks from emerging technology in its tenth Global Risks Report. In this lucid 50-page document, specific note is made of the challenges associated with regulating risks that are extreme or unforeseen:

The establishment of new fundamental capabilities, as is happening for example with synthetic biology and artificial intelligence, is especially associated with risks that cannot be fully assessed in the laboratory. Once the genie is out of the bottle, the possibility exists of undesirable applications or effects that could not be anticipated at the time of invention. Some of these risks could be existential – that is, endangering the future of human life… The growing complexity of new technologies, combined with a lack of scientific knowledge about their future evolution and often a lack of transparency, makes them harder for both individuals and regulatory bodies to understand.

Under this category of emerging technological risks, the report features three domains, all of particular interest to CSER: artificial intelligence, synthetic biology, and gene drives. On AI, the authors reinforce the message of CSER advisors Nick Bostrom and Stuart Russell:

These and other challenges to AI progress are by now well known within the field, but a recent survey shows that the most-cited living AI scientists still expect human-level AI to be produced in the latter half of this century, if not sooner, followed (in a few years or decades) by substantially smarter-than-human AI. If they are right, such an advance would likely transform nearly every sector of human activity.

If this technological transition is handled well, it could lead to enormously higher productivity and standards of living. On the other hand, if the transition is mishandled, the consequences could be catastrophic.

Contrary to public perception and Hollywood screenplays, it does not seem likely that advanced AI will suddenly become conscious and malicious. Instead, according to a co-author of the world’s leading AI textbook, Stuart Russell of the University of California, Berkeley, the core problem is one of aligning AI goals with human goals. If smarter-than-human AIs are built with goal specifications that subtly differ from what their inventors intended, it is not clear that it will be possible to stop those AIs from using all available resources to pursue those goals, any more than chimpanzees can stop humans from doing what they want.

On synthetic biology, the report notes that some environmental risks may be substantial, and suggests a possible gap in the regulation of small and medium enterprises and amateur practitioners:

The risk that most concerns analysts, however, is the possibility of a synthesized organism causing harm in nature, whether by error or terror. Living organisms are self-replicating and can be robust and invasive. The terror possibility is especially pertinent because synthetic biology is “small tech” – it does not require large, expensive facilities or easily-tracked resources… The amateur synthetic biology community is very aware of safety issues and pursuing bottom-up options for self-regulation in various ways, such as developing voluntary codes of practice. However, self-regulation has been criticized as inadequate, including by a coalition of civil society groups campaigning for strong oversight mechanisms. Such mechanisms would need to account for the cross-border nature of the technology, and inherent uncertainty over its future direction.

For gene drives, a technology that is still rapidly evolving, the report recommends further analysis:

Scientists and regulators need to work together from an early stage to understand the challenges, opportunities and risks associated with gene drives, and agree in advance to a governance regime that would govern research, testing and release. Acting now would allow time for research into areas of uncertainty, public discussion of security and environmental concerns, and the development and testing of safety features. Governance standards or regulatory regimes need to be developed proactively and flexibly to adapt to the fast-moving development of the science.

It is encouraging that extreme technological risks are continuing to receive attention in high-level social, economic and legal analysis.
