Each month, The Existential Risk Research Assessment (TERRA) uses a unique machine-learning model to predict those publications most relevant to existential risk or global catastrophic risk. All previous updates can be found here. The following are a selection of those papers identified this month.
Please note that we provide these citations and abstracts as a service to aid other researchers in paper discovery and that inclusion does not represent any kind of endorsement of this research by the Centre for the Study of Existential Risk or our researchers.
The catastrophic risks posed by new technologies such as killer robots and geoengineering have triggered calls for halting new research. Arguments for restricting research typically have a slippery-slope structure: Researching A will lead to deployment; we have decisive moral reasons against deployment; therefore, we should not research A. However, scientific uncertainty makes it difficult to prove or disprove the conclusion of slippery-slope arguments. This article accepts this indeterminacy and asks whether and when it would be permissible to restrict research under empirical and normative uncertainty. The argument starts from the ethical framework for the regulation of scientific research with human subjects and offers modifications to adapt it to the purpose of restricting new technologies. Two main questions arise in the process: whether it is permissible to impose restrictions at the research stage to prevent harms that will arise from the use of a technology and whether it is permissible to restrict research preemptively on the grounds of public fear and anxiety, before there is sufficient evidence establishing the risk of harm. I answer both questions in the affirmative and defend this position against objections.
In a well-known paper, Nick Bostrom presents a confrontation between a fictionalised Blaise Pascal and a mysterious mugger. The mugger persuades Pascal to hand over his wallet by exploiting Pascal's commitment to expected utility maximisation. He does so by offering Pascal an astronomically high reward such that, despite Pascal's low credence in the mugger's truthfulness, the expected utility of accepting the mugging is higher than that of rejecting it. In this article, I present another sort of high-value, low-credence mugging. This time, the mugger utilises research on existential risk and the long-term potential of humanity to exploit Pascal's expected-utility-maximising descendant. This mugging is more insidious than Bostrom's original, as it relies on plausible facts about the long-term future, as well as realistic credences about how our everyday actions could, albeit with infinitesimally low likelihood, affect the future of humanity.
The international governance of artificial intelligence (AI) is at a crossroads: should it remain fragmented or be centralised? We draw on the history of environment, trade, and security regimes to identify advantages and disadvantages of centralising AI governance. Some considerations, such as efficiency and political power, speak for centralisation. The risk of creating a slow and brittle institution, and the difficulty of pairing deep rules with adequate participation, speak against it. Other considerations depend on the specific design. A centralised body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial, and fragmented institutions could self-organise. In sum, these trade-offs should inform development of the AI governance architecture, which is only now emerging. We apply the trade-offs to the case of the potential development of high-level machine intelligence. We conclude with two recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial. But locking in an inadequate structure may pose a fate worse than fragmentation. Second, fragmentation will likely persist for now. The developing landscape should be monitored to see whether it is self-organising or simply inadequate.
Posthumanism is a philosophy based on a critique of the negative consequences of humanism and on the correct conceptualization of the possibility of the posthuman. This study takes these posthuman claims at face value by elaborating a subsequent connection of posthumanism to the problem of the meaning of the possibility of human extinction. Historically, the demise of the world and humanity has been formulated in relation to the problem of the apocalypse. Thus, the first part of the study is dedicated to the description of two basic posthuman narratives (posthuman ecologies) about the end of man. The second part of the study is focused on a critical analysis of the posthuman approach to the convergent problem of human species extinction. We will argue that some types of posthumanism fail, rather paradoxically, to properly grasp and solve the problem of the meaning of the world after (post) humanity.
5. Reynolds, J.L. (2020). Is solar geoengineering ungovernable? A critical assessment of governance challenges identified by the Intergovernmental Panel on Climate Change. Wiley Interdisciplinary Reviews: Climate Change.
Solar radiation modification (SRM) could greatly reduce climate change and associated risks. Yet it has not been well-received by the climate change expert community. This is evident in the authoritative reports of the Intergovernmental Panel on Climate Change (IPCC), which emphasize SRM's governance, political, social, and ethical challenges. I find seven such challenges identified in the IPCC reports: that SRM could lessen mitigation; that its termination would cause severe climatic impacts; that researching SRM would create a “slippery slope” to its inevitable and unwanted use; that decisions to use it could be contrary to democratic norms; that the public may not accept SRM; that it could be unethical; and that decisions to use SRM could be unilateral. After assessing the extent to which these challenges are supported by existing evidence, scholarly literature, and robust logic, I conclude that, for six of the seven, the IPCC's claims variously are speculative, fail to consider both advantages and disadvantages, implicitly make unreasonable negative assumptions, are contrary to existing evidence, and/or are meaninglessly vague. I suggest some reasons for the reports' failure to meet the IPCC's standards of balance, thoroughness, and accuracy, and recommend a dedicated Special Report on SRM.
What is the future of ‘environmental’ policy in times of earth system transformations and the recognition of the ‘Anthropocene’ as a new epoch in planetary history? I argue that fifty years after the 1972 Stockholm Conference on the Human Environment, we need to revisit the ‘environmental policy’ paradigm because it falls short on five grounds. The paradigm (a) emphasizes a dichotomy of ‘humans’ and ‘nature’ that is no longer defensible; (b) is incompatible with more integrated research concepts that have overcome this human-environment dichotomy; (c) deemphasizes questions of planetary justice and democracy; (d) fails to deal with novel normative challenges of the Anthropocene; and (e) may risk political marginalization of central concerns of human and non-human survival. In the second part I discuss institutional implications, arguing for novel approaches in science collaboration, new institutional arrangements and a more central place for questions of planetary justice and earth-system risks in governance.