Four Month Report January 2020 - April 2020

16 September 2020

We send short monthly updates in our newsletter – subscribe here.


Contents

  1. Coronavirus update
  2. New Staff
  3. Policy and industry engagement – Impact
  4. Public engagement – Field-building
  5. Academic engagement – Field-building
  6. Publications

Overview

The Centre for the Study of Existential Risk (CSER) is an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of risks that could lead to civilizational collapse or human extinction. Our research focuses on Global Catastrophic Biological Risks, Extreme Risks and the Global Environment, Risks from Artificial Intelligence, and Managing Extreme Technological Risks. Our work is shaped around three main goals:

  • Understanding: we study existential risk.
  • Impact: we develop collaborative strategies to reduce existential risk.
  • Field-building: we foster a global community of academics, technologists and policy-makers working to tackle existential risk.

Our last Six Month Report was in February 2020. Since then, we have continued to advance existential risk research and grow the field. This report outlines our activities over the last four months and our plans for the period ahead. Highlights include:

  • Publication of seven peer-reviewed papers on: experimentation in biosecurity governance; activism by the AI community; exploring AI futures through role play; moving beyond near- and long-term in AI ethics; whether AI governance should be centralised; whether activist investors should divest or engage; and why human extinction is morally wrong.
  • Publication of two reports on: mechanisms to move toward trustworthy AI development, and a solution scan of non-pharmaceutical options to reduce SARS-CoV-2 transmission.
  • The launch of the Future Generations Bill, based on CSER research.
  • Hosting several interdisciplinary workshops, presenting accepted papers at conferences, and welcoming Matthew Meselson for a visit.
  • A public lecture, video and podcast from the President of the Bulletin of the Atomic Scientists on: Why is the Doomsday Clock the closest it’s ever been to midnight?
  • Extensive media coverage of our papers and reports, and of our researchers.
  • Recruiting five new team members.

1. Coronavirus update

CSER staff are currently working remotely and will continue to do so until the University and Government advise otherwise. We are fortunate that most of our work can be done remotely, but of course some of our planned activities have had to change:

  • Our events programme is paused: no public lectures or in-person workshops are scheduled for the next few months, and some of our planned workshops are shifting to remote formats.
  • We have postponed our Cambridge Conference on Catastrophic Risk until 17-18 November. Most of the speakers have confirmed their availability for these dates. We will keep these plans under review over the next few months, adjusting them in light of, for example, whether international travel will be advisable.
  • We had intended to run a Summer Research Visitor programme, as we did last year. That is now on hold, and we are considering other options, including postponement or remote mentoring.
  • We were part way through recruiting a new administrative assistant, but had to close this recruitment; we will re-advertise the position as soon as circumstances allow.

The team is adjusting well to the new circumstances, with increased use of Slack and video conferencing. We have recently restarted our weekly team meetings / work-in-progress seminars, and we are in regular contact with all our staff. A public update has been posted on our website to inform the wider community.

We are taking care to balance our long-term research projects with ideas and opportunities arising in the context of COVID-19. One project currently in progress, led by Prof Bill Sutherland in collaboration with the BioRISC project and Conservation Evidence, is a solution scan of non-pharmaceutical options to reduce SARS-CoV-2 transmission. The preprint has received extensive media coverage.

2. New Staff

To support organisation of our 2020 conference, we recruited a part-time administrative assistant in February. Annie Bacon provided great support in this role, and as we have had to put the recruitment of a full-time administrative assistant on hold, she will be staying with us for at least another few months.

We recently completed recruitment for three postdoctoral research associates and one senior research associate. We will announce further details of these new staff once they have started.  

3. Policy and industry engagement – Impact

We have met and engaged with policymakers and institutions across the world who are grappling with the challenge of supporting the socially beneficial aspects of new technologies while mitigating their risks. Through these personal connections and institutional advice, we have had the opportunity to reframe key aspects of the policy debate. In addition, we continued our flourishing collaboration with corporate leaders. Extending our links improves our research and allows us to advise companies on more responsible practices.

  • 22 January: Haydn Belfield attended the Spectator’s Parliamentarian of the Year Award and networked with MPs and ministers.
  • 23 January: Visit from Jingying Yang from the Partnership on AI.
  • 21-25 January: Ellen Quigley in Davos to meet decision-makers.
  • 12 February: The Future Generations Bill launch in Parliament: Lord Bird's draft legislation was introduced to MPs and peers at a packed House of Lords reception, organised by the Today For Tomorrow cross-party campaign with the support of The Big Issue Group and the APPG for Future Generations. The Bill draws on CSER research, especially the paper Representation of future generations in United Kingdom policy-making. The campaign for a Future Generations Bill was also covered in i news, The House, The Guardian, and The Financial Times.

4. Public engagement – Field-building

  • 3 March: Blavatnik Public Lecture – The President and CEO of the Bulletin of the Atomic Scientists, Rachel Bronson, on: Why is the Doomsday Clock the closest it’s ever been to midnight? A video of the lecture is available. We also arranged for David Runciman to interview her on a special Talking Politics podcast episode.
  • Our solution scan preprint was featured on the Cambridge University blog and received wide media coverage.
  • Asaf Tzachor’s paper on ‘future foods’ was covered by the BBC.
  • The Science paper on biosecurity governance was covered by the AAAS News.
  • Catherine Rhodes was quoted on COVID-19 in the Guardian, BBC World Service and Newsweek.
  • Simon Beard was featured in a programme on ‘Cathedral Thinkers’ (BBC Radio 4), and in an interview on climate change and existential risk (Grist).

We’re reaching more and more people with our research:

  • 15,231 website visitors over the last two months.
  • 7,315 newsletter subscribers.
  • 9,267 Twitter followers.
  • 2,800 Facebook followers.
  • Haydn Belfield wrote in the Daily Mirror on how Keir Starmer should scrutinise the automation and surveillance aspects of the government’s COVID-19 response, wrote a short blog post on the trustworthy AI development report, and was quoted in the Financial Times.
  • Lord Martin Rees appeared on Sean Carroll’s Mindscape podcast.
  • CSER research assistant Catherine Richards has been working with Lord Martin Rees to develop a new website about his work. This will launch within the next few weeks, alongside a Twitter account.

5. Academic engagement – Field-building

As an interdisciplinary research centre within Cambridge University, we seek to grow the academic field of existential risk research, so that it receives the rigorous and detailed attention it deserves.

  • 28 January: workshop on Assessing global catastrophic risk potential of communicable chronic diseases, led by Lauren Holt.
  • 6-7 February 2020: AI safety landscape and SafeAI workshop, AAAI-20, New York. Co-organised by Seán Ó hÉigeartaigh, José Hernández-Orallo, and international colleagues.
  • 7-8 February 2020: Haydn Belfield, Jess Whittlestone, and Ross Gruetzemacher (CSER Summer Research Visitor, 2019) presented their three accepted papers at the 2020 AAAI/ACM Conference on Artificial Intelligence, Ethics and Society (AIES conference), and met many collaborators at AAAI.
  • 7 March: Catherine Rhodes gave a talk on Avoiding Catastrophe from Misuse of Biology at the Student Young Pugwash Conference, Warwick University.
  • 12-14 March: Visit from Matthew Meselson, a leading molecular biologist who has played a substantial role in biological and chemical security, especially in the development of the Biological Weapons Convention.
  • 18-19 March: Jess Whittlestone and Haydn Belfield mentored students at a Cambridge Summer Programme in Applied Reasoning (CaSPAR) online workshop, organised by CSER Visitor Jaime Sevilla.
  • 20-22 March: Haydn Belfield remotely participated in the Foresight Institute’s 2020 AGI Strategy Meeting and Effective Altruism Global.

We continued our weekly ‘Work-in-Progress’ series:

  • 27 January: Simon Beard – How to define Global Catastrophic Environmental Risk
  • 3 February: Jaime Sevilla – Modelling Vantage Points
  • 17 February: Michael Richardson, University of New South Wales / Visiting Fellow at Goldsmiths – Drones and the Problem of Witnessing Remote War
  • 24 February: Shin-Shin Hua, CSER Research Affiliate – Competition Law and Cooperation
  • 4 March: Roundtable with Rachel Bronson, President and CEO of the Bulletin of the Atomic Scientists.
  • 6 March: Seán Ó hÉigeartaigh
  • 9 March: Simon Beard – A Science of Global Risk
  • 16 March: Shivam Patel – Modelling the landscape of Machine Learning research

6. Publications

Peer-reviewed papers:

  • Sam Weiss Evans, Jacob Beal, Kavita Berger, Diederik A. Bleijs, Alessia Cagnetti, Francesca Ceroni, Gerald L. Epstein, Natàlia Garcia-Reyero, David R. Gillum, Graeme Harkess, Nathan J. Hillson, Petra A. M. Hogervorst, Jacob L. Jordan, Geneviève Lacroix, Rebecca Moritz, Seán Ó hÉigeartaigh, Megan J. Palmer, Mark W. J. van Passel. (2020). Embrace experimentation in biosecurity governance. Science.
    • “As biological research and its applications rapidly evolve, new attempts at the governance of biology are emerging, challenging traditional assumptions about how science works and who is responsible for governing. However, these governance approaches often are not evaluated, analyzed, or compared. This hinders the building of a cumulative base of experience and opportunities for learning. Consider “biosecurity governance,” a term with no internationally agreed definition, here defined as the processes that influence behavior to prevent or deter misuse of biological science and technology. Changes in technical, social, and political environments, coupled with the emergence of natural diseases such as coronavirus disease 2019 (COVID-19), are testing existing governance processes. This has led some communities to look beyond existing biosecurity models, policies, and procedures. But without systematic analysis and learning across them, it is hard to know what works. We suggest that activities focused on rethinking biosecurity governance present opportunities to “experiment” with new sets of assumptions about the relationship among biology, security, and society, leading to the development, assessment, and iteration of governance hypotheses.”
  • Haydn Belfield. (2020). Activism in the AI Community: Analysing Recent Achievements and Future Prospects. Proceedings of the 2020 AAAI/ACM Conference on Artificial Intelligence, Ethics and Society.
    • “The artificial intelligence (AI) community has recently engaged in activism in relation to their employers, other members of the community, and their governments in order to shape the societal and ethical implications of AI. It has achieved some notable successes, but prospects for further political organising and activism are uncertain. We survey activism by the AI community over the last six years; apply two analytical frameworks drawing upon the literature on epistemic communities, and worker organising and bargaining; and explore what they imply for the future prospects of the AI community. Success thus far has hinged on a coherent shared culture, and high bargaining power due to the high demand for a limited supply of AI ‘talent’. Both are crucial to the future of AI activism and worthy of sustained attention.”
  • Shahar Avin, Ross Gruetzemacher, James Fox. (2020). Exploring AI Futures Through Role Play. Proceedings of the 2020 AAAI/ACM Conference on Artificial Intelligence, Ethics and Society.
    • “We present an innovative methodology for studying and teaching the impacts of AI through a role play game. The game serves two primary purposes: 1) training AI developers and AI policy professionals to reflect on and prepare for future social and ethical challenges related to AI and 2) exploring possible futures involving AI technology development, deployment, social impacts, and governance. While the game currently focuses on the inter-relations between short-, mid- and long-term impacts of AI, it has potential to be adapted for a broad range of scenarios, exploring in greater depth issues of AI policy research and affording training within organizations. The game presented here has undergone two years of development and has been tested through over 30 events involving between 3 and 70 participants. The game is under active development, but preliminary findings suggest that role play is a promising methodology for both exploring AI futures and training individuals and organizations in thinking about, and reflecting on, the impacts of AI and strategic mistakes that can be avoided today.”
  • Carina Prunkl and Jess Whittlestone. (2020). Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society. Proceedings of the 2020 AAAI/ACM Conference on Artificial Intelligence, Ethics and Society.
    • “One way of carving up the broad "AI ethics and society" research space that has emerged in recent years is to distinguish between "near-term" and "long-term" research. While such ways of breaking down the research space can be useful, we put forward several concerns about the near/long-term distinction gaining too much prominence in how research questions and priorities are framed. We highlight some ambiguities and inconsistencies in how the distinction is used, and argue that while there are differing priorities within this broad research community, these differences are not well-captured by the near/long-term distinction. We unpack the near/long-term distinction into four different dimensions, and propose some ways that researchers can communicate more clearly about their work and priorities using these dimensions. We suggest that moving towards a more nuanced conversation about research priorities can help establish new opportunities for collaboration, aid the development of more consistent and coherent research agendas, and enable identification of previously neglected research areas.”
  • Peter Cihon, Matthijs M. Maas and Luke Kemp. (2020). Should Artificial Intelligence Governance be Centralised? Six Design Lessons from History. Proceedings of the 2020 AAAI/ACM Conference on Artificial Intelligence, Ethics and Society.
    • “Can effective international governance for artificial intelligence remain fragmented, or is there a need for a centralised international organisation for AI? We draw on the history of other international regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak in favour of centralisation. Conversely, the risk of creating a slow and brittle institution speaks against it, as does the difficulty in securing participation while creating stringent rules. Other considerations depend on the specific design of a centralised institution. A well-designed body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial and a fragmented landscape of institutions can be self-organising. Centralisation entails trade-offs and the details matter. We conclude with two core recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial. But locking-in an inadequate structure may pose a fate worse than fragmentation. Second, for now fragmentation will likely persist. This should be closely monitored to see if it is self-organising or simply inadequate.”
  • Simon Beard and Patrick Kaczmarek. (2020). On the Wrongness of Human Extinction. Argumenta.
    • “Most people agree that human extinction would be very bad and that causing, or failing to prevent it would, therefore, be very wrong. However, the reasons why it would be wrong, and the precise degree of its wrongness, remain a topic of debate amongst philosophers. In recent papers, Elizabeth Finneron-Burns and Johann Frick have both argued that it is not a wrong-making feature of human extinction that it would cause many potential people with lives worth living never to be born, and hence that causing human extinction would be, in at least one way, less wrong than many have thought. In making these arguments, both assume that merely possible future people cannot be harmed by their nonexistence, and thus do not have any claim to be brought into existence. In this paper, we raise objections to their arguments and suggest that there is nothing inherent in the moral theories they put forward that implies future people cannot have this sort of ‘existential’ claim. In doing so, we draw on the work of Derek Parfit, who argued, in a recent paper, that coming into existence benefits a person in that it is ‘good for’ them, even if it is not ‘better for’ them than non-existence. We also find that many of their objections to the view that it is wrong not to bring future people into existence rest on the assumption that, were these people to have claims on us, these must be equivalent to the claims that existing people have to be benefitted. However, we show that Parfit’s work demonstrates how this is not the case.”
  • David Chambers, Elroy Dimson and Ellen Quigley. (2020). To Divest or to Engage? A Case Study of Investor Responses to Climate Activism. The Journal of Investing ESG Special Issue.
    • “An increasing number of investors face the dilemma of whether to divest from environmentally damaging businesses or enter into a dialogue with them. This debate has now taken root in Cambridge, England, where the ancient University of Cambridge confronts great pressure from students and staff members to respond to the threat of climate breakdown. Having already received two reports on its approach to responsible investment, the university appointed a new chief investment officer (CIO) who, alongside the Cambridge University Endowment Fund (CUEF) Trustee and the wider university community, needs to consider the question of whether to divest from, or to engage with, fossil fuel firms. What would be the financial effects of each choice, and what would be the outcome in terms of environmental impact? This case study describes the background and the research behind the debate. The new CIO and her colleague, the chief financial officer, reflect on both sides of this debate as they consider the position the university should adopt. In contrast with other journal articles, this one does not propose solutions. Instead, it asks the reader to consider the arguments and to take a position.”

Reports/preprints:

  • Miles Brundage, Shahar Avin, Jasmine Wang, Haydn Belfield, Gretchen Krueger, Gillian Hadfield, Heidy Khlaaf, Jingying Yang, Helen Toner, Ruth Fong, Tegan Maharaj, Pang Wei Koh, Sara Hooker, Jade Leung, Andrew Trask, Emma Bluemke, Jon Lebensold, Cullen O'Keefe, Mark Koren, Théo Ryffel, JB Rubinovitz, Tamay Besiroglu, Federica Carugati, Jack Clark, Peter Eckersley, Sarah de Haas, Maritza Johnson, Ben Laurie, Alex Ingerman, Igor Krawczuk, Amanda Askell, Rosario Cammarota, Andrew Lohn, Shagun Sodhani, Charlotte Stix, Peter Henderson, Logan Graham, Carina Prunkl, Bianca Martin, Elizabeth Seger, Noa Zilberman, Seán Ó hÉigeartaigh, Frens Kroeger, Girish Sastry, Rebecca Kagan, Adrian Weller, Brian Tse, Beth Barnes, Allan Dafoe, Paul Scharre, Martijn Rasser, David Krueger, Carrick Flynn, Ariel Herbert-Voss, Thomas Krendl Gilbert, Lisa Dyer, Saif Khan, Markus Anderljung, and Yoshua Bengio. (2020). Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. arXiv:2004.07213.
    • “The increasingly widespread application of AI research has brought growing awareness of the risks posed by AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from users, civil society, governments, and other stakeholders, there is a need to move beyond principles to a focus on mechanisms for demonstrating responsible behavior. Making and assessing verifiable claims, to which developers can be held accountable, is one step in this direction. This report suggests various steps that different stakeholders can take to make it easier to verify claims made about AI systems and their associated development processes. The authors believe the implementation of such mechanisms can help make progress on one component of the multifaceted problem of ensuring that AI development is conducted in a trustworthy fashion.”
  • William Sutherland, David C. Aldridge, Philip Martin, Catherine Rhodes, Gorm Shackelford, Simon Beard, Andrew J. Bladon, Cameron Brick, Mark Burgman, Alec P. Christie, Lynn V. Dicks, Andrew P. Dobson, Harriet Downey, Fangyuan Hua, Amelia S.C. Hood, Alice C. Hughes, Rebecca M. Jarvis, Douglas MacFarlane, Anne-Christine Mupepele, William H. Morgan, Seán Ó hÉigeartaigh, Stefan J. Marciniak, Cassidy Nelson, Clarissa Rios Rojas, Katherine A. Sainsbury, Rebecca K. Smith, Lalitha Sundaram, Hannah Tankard, Nigel G. Taylor, Ann Thornton, John Watkins, Thomas B. White, Kate Willott, Silviu O. Petrovan, Haydn Belfield. (2020). Informing management of lockdowns and a phased return to normality: a Solution Scan of non-pharmaceutical options to reduce SARS-CoV-2 transmission. https://covid-19.biorisc.com/
    • “We have identified 275 options to reduce SARS-CoV-2 transmission in five key areas: (1) physical isolation, (2) reducing transmission through contaminated items, (3) enhancing cleaning and hygiene, (4) reducing spread through pets, and (5) restricting disease spread between areas. For any particular problem this long list will quickly be winnowed down to a much shorter list of potential options based on relevance and practicality; this bespoke shortlist will be the subject of more detailed consideration. We stress that the listing of an option should not be seen as a recommendation or a suggestion that it is beneficial. Deciding whether to adopt any of these actions involves policy makers and practitioners considering the evidence for the importance of the transmission risk and likely effectiveness, as well as its cost, practicality and fairness.”

Subscribe to our mailing list to get our latest updates