12 Publications: Update on our research

24 January 2020

In the three months from November to January, CSER published six peer-reviewed academic papers and six reports on the nature, ethics and governance of existential risk, as we begin to apply new methods from our developing science of global risk.

Firstly, a team of researchers led by Gorm Shackelford (and also including CSER researchers Luke Kemp, Catherine Rhodes, Lalitha Sundaram, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Julius Weitzdörfer and Shahar Avin) published a paper on accumulating evidence using crowdsourcing and machine learning. It describes the creation of TERRA, our human-assisted machine-learning tool, which produces a living bibliography of existential risk research, updated each month. We are now publishing a selection of new additions to this bibliography each month (see January’s update) and will shortly be undertaking development work to further improve TERRA and its user interface.

Drawing on this bibliography and other sources, Simon Beard, Thomas Rowe and James Fox reviewed studies of different drivers of existential risk in order to produce an analysis and evaluation of methods currently used to quantify the likelihood of existential hazards. The paper calls for a more critical approach to methodology and to the use of quantified claims by researchers aiming to contribute to the management of existential risk, and argues that greater awareness of the diverse methods available to these researchers should form an important part of this. CSER is now incorporating the insights from this analysis into the planning of future studies of global catastrophic and existential risk.

Patrick Kaczmarek and Simon Beard have published a paper on the ethics of human extinction. Human Extinction and Our Obligations to the Past (2019) examines whether non-consequentialist theories should make strong claims about the need to avoid human extinction. They argue that in at least one case, the ‘contractualist’ ethics of T. M. Scanlon, the answer is yes. Simon is now preparing a more general survey of how different ethical theories respond to human extinction.

Turning from human extinction to civilizational collapse, Sabin Roman and Erika Palmer published a quantified model of the growth and decline of the Western Roman Empire, in order to understand the interdependent dynamics of army size, conquered territory, and the production and debasement of coins. The model suggests that a high degree of centralized control was necessary, in line with basic tenets of structural-demographic theory. This was the third model of a historical civilizational collapse produced by Sabin; over the coming year he will seek to combine these into a generalized model that can be applied to contemporary civilization.

Julius Weitzdörfer and Simon Beard published a chapter on Law and Policy Responses to Disaster-Induced Financial Distress. This studies how disaster response and social justice both helped and hindered recovery from the Fukushima disaster in Japan, the single most costly disaster in human history. It makes a number of recommendations for improving recovery from mega-disasters, in particular focusing on the need for disaster justice to be forward-facing (seeking to build future resilience) rather than backward-facing (seeking to rebuild the status quo with its many imperfections).

Finally, Jess Whittlestone and Aviv Ovadya published a paper exploring the tension between openness and prudence in responsible AI research as part of the NeurIPS 2019 Joint Workshop on AI for Social Good. The paper discusses how different beliefs and values can lead to differing perspectives on how the artificial intelligence (AI) community should manage this tension, and considers implications for what responsible publication norms in AI research might look like in practice.

Reports:

Luke Kemp and Catherine Rhodes published a report for the Global Challenges Foundation, titled The Cartography of Global Catastrophic Governance. This highlights how fragmented the international governance of global catastrophic risks (GCRs) is, and provides an overview of international governance arrangements for eight different GCR hazards and two drivers. It finds that there are clusters of dedicated regulation and action, including on nuclear warfare, climate change, pandemics, and biological and chemical warfare, but that their effectiveness is often questionable. For others, such as catastrophic uses of AI, asteroid impacts, solar geoengineering, unknown risks, super-volcanic eruptions, inequality and many areas of ecological collapse, the legal landscape is littered more with gaps than with effective policy.

Simon Beard and Phil Torres also published a report for the Global Challenges Foundation, titled Identifying and Assessing the Drivers of Global Catastrophic Risk (forthcoming). This highlights how the emerging body of GCR research has failed to make sufficient progress towards establishing a unified methodological framework for studying these risks, and points to key steps that would help to produce such a framework, including: moving away from a hazard-focused conception of risk; and diversifying the political, philosophical and economic context of the field.

Ross Gruetzemacher and Jess Whittlestone published a preprint paper on arXiv titled Defining and Unpacking Transformative AI. This identified three different uses of the term ‘Transformative AI’ (TAI): to describe levels of societal transformation associated with previous 'general purpose technologies'; to refer to more drastic levels of transformation, comparable to the agricultural or industrial revolutions; and, in a much looser sense, to describe the effects current AI systems are already having on society. The paper proposes distinguishing TAI from radically transformative AI (RTAI), roughly corresponding to societal change on the level of the agricultural or industrial revolutions, and considers the relationship between TAI and RTAI and whether we should necessarily expect a period of TAI to precede the emergence of RTAI.

Karina Vold and Jess Whittlestone published a chapter, Privacy, Autonomy and Personalised Targeting, in the IE Report on Data, Privacy and the Individual. It focuses on how information about personal attributes, inferred from collected data such as online behaviour, can be used to tailor messages and services to specific individuals or groups. This kind of ‘personalised targeting’ has the potential to influence individuals’ perceptions, attitudes, and choices in unprecedented ways. The chapter argues that, because it is becoming easier for companies to use collected data for influence, threats to privacy are increasingly also threats to personal autonomy: an individual’s ability to reflect on and decide freely about their values, actions, and behaviour, and to act on those choices. It makes the case that a new ethics of privacy needs to think more rigorously about how personal data may be used, and about its potential impact on personal autonomy.

Natalie Jones co-authored the chapter on Policy options to close the production gap, in The Production Gap: 2019 report. This highlights that governments have a range of policy options to regulate fossil fuel supply, including limits on new exploration and extraction and the removal of subsidies for production, with some governments already demonstrating leadership by enacting bans on oil and gas exploration and extraction or by phasing out coal extraction. It also shows how non-state actors and subnational governments can help facilitate a transition away from fossil fuels, by mobilizing constituencies and shifting investment to lower-carbon options.

Finally, the CSER-supported All-Party Parliamentary Group for Future Generations will shortly be publishing its first report, on Managing Technological Risk, authored by Simon Beard. This brings together the key findings of the group’s first round of public meetings with research undertaken by CSER researchers and others, making a comprehensive set of recommendations to the new parliament on how to manage future technological risks for the benefit of future generations. Topics discussed include critical infrastructure threats, AI, biosecurity, negative emissions technology and swarming drones.

Subscribe to our mailing list to get our latest updates