Each month, The Existential Risk Research Assessment (TERRA) uses a unique machine-learning model to predict those publications most relevant to existential risk or global catastrophic risk. All previous updates can be found here. The following are a selection of those papers identified this month.
Please note that we provide these citations and abstracts as a service to aid other researchers in paper discovery and that inclusion does not represent any kind of endorsement of this research by the Centre for the Study of Existential Risk or our researchers.
In this perspective, we consider the possible role of the United Nations (UN) with respect to existential risks to human civilization and the survival of humanity. We illustrate how existential risks have been discussed at an international governance level, specifically in documents in the UN Digital Library. In this large corpus, discussions of nuclear war account for over two-thirds (69%, 67/97) of mentions of existential risks, while mentions of other existential risks, or of existential risk as a category, are scant. We take these observations to imply inadequate attention to these significant threats. These deficits, combined with the need for a global response to many risks, suggest that UN member nations should urgently advocate for appropriate action at the UN to address threats such as artificial intelligence, synthetic biology, geoengineering, and supervolcanic eruption, in a fashion analogous to existing attempts to mitigate the threats from nuclear war and near-Earth objects.
There has been extensive attention to near-term and long-term AI technology and its accompanying societal issues, but the medium term has gone largely overlooked. This paper develops the concept of medium-term AI, evaluates its importance, and analyzes some medium-term societal issues. Medium-term AI can be important in its own right and as a topic that can bridge the sometimes acrimonious divide between those who favor attention to near-term AI and those who favor attention to long-term AI. The paper proposes the medium-term AI hypothesis: the medium term is important from the perspectives of both groups. The paper analyzes medium-term AI in terms of governance institutions, collective action, corporate AI development, and military/national security communities. Across portions of these four areas, some support for the medium-term AI hypothesis is found, though in some cases the matter is unclear.
Accumulating evidence using crowdsourcing and machine learning: a living bibliography about existential risk and global catastrophic risk
Peer-reviewed paper by Gorm Shackelford, Luke Kemp, Catherine Rhodes, Lalitha Sundaram, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Julius Weitzdörfer, Shahar Avin, Dag Sørebø, Elliot M. Jones, John B. Hume, David Price, David Pyle, Daniel Hurt, Theodore Stone, Harry Watkins, Lydia Collas, William Sutherland