6 Recent Publications on Existential Risk (Aug 2020 update)

10 September 2020

Each month, The Existential Risk Research Assessment (TERRA) uses a unique machine-learning model to predict those publications most relevant to existential risk or global catastrophic risk. All previous updates can be found here. The following is a selection of the papers identified this month.

Please note that we provide these citations and abstracts as a service to aid other researchers in paper discovery, and that inclusion does not represent any endorsement of this research by the Centre for the Study of Existential Risk or our researchers.

1. Worlding beyond ‘the’ ‘end’ of ‘the world’: white apocalyptic visions and BIPOC futurisms 

We often hear that the ‘end of the world’ is approaching – but whose world, exactly, is expected to end? Over the last several decades, a popular and influential literature has emerged, in international relations (IR), the social sciences, and popular culture, on subjects such as ‘human extinction’, ‘global catastrophic risks’, and eco-apocalypse. Written by scientists, political scientists, and journalists for wide public audiences, this genre diagnoses what it considers the most serious global threats and offers strategies to protect the future of ‘humanity’. This article will critically engage this genre to two ends: first, we aim to show that the present apocalyptic narratives embed a series of problematic assumptions which reveal that they are motivated not by a general concern with futures but rather by the task of securing white futures. Second, we seek to highlight how visions drawn from Black, Indigenous and People of Color (BIPOC) futurisms reimagine more just and vibrant futures.

2. Pandemics, existential and non-existential risks to humanity 

The pandemic caused by the SARS-CoV-2 virus has initiated an era of structural economic stagnation. With it, we have crossed a threshold in which the planet's so-called ‘ecosystemic services’ have started becoming ‘ecosystemic disservices’. COVID-19 is one of those disservices. In itself, it certainly does not constitute an existential risk to humanity. But what will be discussed here is the non-existence of a clear dividing line between existential and non-existential risks. Frequently, an existential risk results from a set of crises that, separately, do not existentially threaten humanity. Nevertheless, combined and acting in synergy, those crises have the potential to do so. The current pandemic points to the chance of a great civilizational shift, probably the last chance before environmental imbalances escape societies' control.

3. Fully Autonomous AI 

In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or “utility function” with respect to which the appropriateness of its actions will be evaluated. From a philosophical perspective, this notion of autonomy seems oddly weak. For, in philosophy, the term is generally used to refer to a stronger capacity, namely the capacity to “give oneself the law,” to decide by oneself what one’s goal or principle of action will be. The predominant view in the literature on the long-term prospects and risks of artificial intelligence is that an artificial agent cannot exhibit such autonomy because it cannot rationally change its own final goal, since changing the final goal is counterproductive with respect to that goal and hence undesirable. The aim of this paper is to challenge this view by showing that it is based on questionable assumptions about the nature of goals and values. I argue that a general AI may very well come to modify its final goal in the course of developing its understanding of the world. This has important implications for how we are to assess the long-term prospects and risks of artificial intelligence.
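For readers who want the standard goal-stability argument in concrete form, the toy sketch below may help. It is our illustration, not code from the paper, and every name in it is hypothetical: a fixed-utility agent scores the option "adopt a different final goal" with its current utility function, so goal change always looks counterproductive to it. This is the predominant view that the paper challenges.

```python
# Toy sketch of the standard goal-stability argument (illustrative only,
# not from the paper). The agent evaluates candidate final goals with
# its CURRENT utility function, so the incumbent goal always wins.

def utility_under_current_goal(goal: str) -> float:
    """Expected value, measured by the current goal (paperclips produced),
    of pursuing `goal` from now on. Numbers are arbitrary stand-ins."""
    expected_paperclips = {
        "maximize_paperclips": 100.0,  # keep the goal: full production
        "maximize_staples": 0.0,       # switch goals: no paperclips made
    }
    return expected_paperclips[goal]

def choose_final_goal(candidates: list[str]) -> str:
    # The evaluation criterion is the current goal itself, which is why
    # changing the final goal is "counterproductive and hence undesirable".
    return max(candidates, key=utility_under_current_goal)

print(choose_final_goal(["maximize_paperclips", "maximize_staples"]))
# -> maximize_paperclips
```

The paper's contention is that a sufficiently general agent need not be locked into this evaluation loop: as its understanding of the world develops, so may its final goal.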

4. Existential risk assessment: A reply to Baum 

We welcome Seth Baum's reply to our paper. We broadly agree with the points Baum makes; however, the field of Existential Risk Studies remains young and undeveloped, and we think there are many points on which further reflection is needed. We briefly discuss three: the normative aspects of terms like ‘existential catastrophe’, the opportunities for low-hanging fruit in method selection and application, and the importance of context when making probability claims.

5. Five challenges to humanity: Learning from pattern/repeat failures in past disasters?

Human civilisation faces a series of existential threats from the combination of five global and human-engineered challenges, namely climate change, resource depletion, environmental degradation, overpopulation and rising social inequality. These challenges are arguably being manifested in both an increased likelihood and magnified impact of catastrophes like forest fires, prolonged droughts, pandemics and social dislocation/upheaval. This article argues that in understanding and addressing these challenges, important lessons can be drawn from what has repeatedly caused organisational failures. It applies the ‘Ten Pathways to Disaster’ model to a series of disasters/catastrophic events and then argues that this model is salient to understanding inadequate responses to the five threats to civilisation. The article argues that because these challenges interact in mutually reinforcing ways, it is critical to address them simultaneously, not in isolation.

6. Artificial versus biological intelligence in the Cosmos: Clues from a stochastic analysis of the Drake equation 

The Drake equation has been used many times to estimate the number of observable civilizations in the galaxy. However, the uncertainty of the outcome is so great that any individual result is of limited use, as predictions can range from a handful of observable civilizations in the observable universe to tens of millions per Milky Way-sized galaxy. A statistical investigation shows that the Drake equation, despite its uncertainties, delivers robust predictions of the likelihood that the prevalent form of intelligence in the universe is artificial rather than biological. The likelihood of artificial intelligence far exceeds the likelihood of biological intelligence in all cases investigated. This conclusion is contingent upon a limited number of plausible assumptions. The significance of this outcome for the Fermi paradox is discussed.  
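For context, the Drake equation multiplies seven factors, N = R* · fp · ne · fl · fi · fc · L, to estimate the number N of communicative civilizations in the galaxy. The stochastic approach the abstract describes treats each factor as a distribution rather than a point estimate. The minimal Monte Carlo sketch below uses parameter ranges we have assumed purely for illustration (they are not taken from the paper) and shows why the resulting predictions for N span many orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(42)
N_SAMPLES = 100_000

def log_uniform(low, high, size):
    """Sample log-uniformly, suitable for factors uncertain across
    orders of magnitude (a common convention in such analyses)."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size)

# Illustrative ranges (assumptions for this sketch, not the paper's values):
R_star = log_uniform(1, 10, N_SAMPLES)     # star formation rate (stars/year)
f_p    = log_uniform(0.1, 1, N_SAMPLES)    # fraction of stars with planets
n_e    = log_uniform(0.1, 5, N_SAMPLES)    # habitable planets per such star
f_l    = log_uniform(1e-3, 1, N_SAMPLES)   # fraction developing life
f_i    = log_uniform(1e-3, 1, N_SAMPLES)   # fraction developing intelligence
f_c    = log_uniform(1e-2, 1, N_SAMPLES)   # fraction releasing detectable signals
L      = log_uniform(1e2, 1e8, N_SAMPLES)  # longevity of communicative phase (years)

# One estimate of N per Monte Carlo sample.
N = R_star * f_p * n_e * f_l * f_i * f_c * L

print(f"median N: {np.median(N):.2g}")
print(f"5th-95th percentile: {np.percentile(N, 5):.2g} to {np.percentile(N, 95):.2g}")
```

Even with these modest ranges the percentiles straddle several orders of magnitude, which is the uncertainty the abstract refers to; the paper's move is to ask which comparative conclusions survive that spread.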
