Ten years ago, Huw Price and Jaan Tallinn met in the back of a cab in Copenhagen, chatting about global catastrophic risk – and how much more research was desperately needed if we were to stand a chance of managing those risks. They met up with Lord Martin Rees, who had almost a decade earlier authored ‘Our Final Century?’ (republished in the US market as ‘Our Final Hour’, without the question mark…). Together, the three co-founded the Centre for the Study of Existential Risk.
We recently held an internal anniversary panel to mark ten years of CSER. The panel included co-founders Martin Rees and Jaan Tallinn, our Executive Director Seán Ó hÉigeartaigh, and one of our earliest postdocs, Lalitha Sundaram. The panellists first discussed their routes to existential risk, then what has changed with CSER and the wider field of existential risk, before digging into two specific topics: biorisk and climate change.
Martin was involved with CND as a student, then with the Pugwash Conferences in the 1980s – he had always been a ‘concerned scientist’. In 2005, he moved away from full-time research to engage more in science policy, as President of the Royal Society. He has consistently pressed Cambridge University to engage more with ‘big picture’ policy questions. The University has the resources to convene experts, and to contribute to scientific rigour in government policy. Especially given its status as (arguably) the top science university in Europe, Cambridge should do its bit to help on important policy topics.
Jaan noted that the three co-founders had three different visions for the Centre: one with strong links to the UK government, one that would boost the impact of the topic in the world, and one that would do ‘hardcore’, in-depth thinking and academic research. We have attempted to weave all three together in our mission on our website:
“We are an interdisciplinary research centre within the University of Cambridge who study existential risks, develop collaborative strategies to reduce them, and foster a global community of academics, technologists and policy-makers working to safeguard humanity.”
Jaan thought that three problems faced the field of existential risk in 2011: funding, talent and reputation. The new, emerging field needed more money and more talented researchers, and it needed to establish itself as a credible academic discipline. All three are necessary to give academics the confidence to ‘bet’ on these research topics.
Lalitha Sundaram was a synthetic biologist, working in a Cambridge lab. She became involved with CSER through a ‘Responsible Research and Innovation’ lunchtime discussion series – where she was first introduced to the term ‘existential risk’. In her first week at CSER, articles appeared in Wired on ‘supermen saving the world’ and a ‘top 10 risks list’, which didn’t fit the much more rigorous discussions happening internally. She was worried about the field being dismissed as ‘hypey’, ‘flaky doom-mongering’ or as a vanity project (another way for Martin to publish books?). But she thinks the field has changed substantially. Why? It is now much more serious.
Jaan noted the field still needs more ‘research muscle’, but that funding availability is moving in a positive direction – and that the reputation of the field has greatly improved (helped in part by having a centre with ‘Cambridge University’ and ‘existential risk’ in the title…).
Seán also observed that we have been ramping up our research output over recent years. He noted especially that in the last few months we have had five papers in Nature group journals, that one of our papers passed 500 citations, and that we have made substantial contributions to policy, notably on AI and climate change.
Lalitha discussed our approach to building the field with interdisciplinary ‘academic engineering’. The field has widened and become more diverse in its methodological approaches. There are more topics on the agenda, and more approaches by young scholars. We have become better at discussing existential risk – especially collaborating with scientific and technical communities at the cutting edge. Instead of pointing fingers at these communities and saying “you’re a risk”, we are collaborating to reduce risk.
Martin agreed, arguing that the field is now viewed by academics and policy-makers as the rigorous study of global risks. Covid helped people realise that. Indeed, imagine how much worse Covid would have been if, for example, the internet had crashed. AI issues have received more attention and funding, and new centres such as the Alan Turing Institute and the Ada Lovelace Institute have been established. Sir Partha Dasgupta (chair of our Management Committee) has written the UK Government’s independent review on the Economics of Biodiversity, which may become as influential as the Stern Review.
In summary, people have been doing good work, being vocal about it, and it is being heard.

Biorisks and climate change
The panel dived into two specific topics: biorisk and climate change.
On biorisks, the panellists noted that ‘biorisks’ covered a lot of ground. The threat model can be broken down into: natural pandemics; state programmes; and bioterrorism. As Covid shows, there is a lot still to do when preparing for and responding to natural pandemics. On state programmes, the indiscriminate nature of biological weapons may have helped us by lowering their military utility. Nevertheless, the largest weapons programmes have been state-led rather than non-state, and there has perhaps been too much focus on the prospects for bioterrorism over the last twenty years.
Lalitha noted that a single bioterrorist killing large numbers of people appears very unlikely at the moment. Of course, this might change over the next 10-20 years as it becomes easier to do new, powerful things with synthetic biology.
One should look at the communities involved in developing biotechnology: how they are constrained by norms and technology, how they are thinking about safety, and how they will evolve. These connections and this engagement create a solid basis for thinking ahead. We should ask how advances might help (for example, new materials or foods), but also what new vulnerabilities these advances might create, and how we can design systems to be resilient.
On climate change, Seán emphasised that ‘how could climate change kill everyone?’ is the wrong question – the right question is how certain climate change scenarios make the world much less safe. How can climate change affect global catastrophic risk at a system level, through interaction effects, and worsen our vulnerability and exposure to hazards?
Martin argued for a focus on ‘tail scenarios’. These are judged perhaps less than 5% likely, but could be twenty times worse. We still need more research on climate sensitivity, tipping points, equilibria at particular temperatures, and climate modelling. Funding should be at the level of health or defence research.
In the past, CSER researchers perhaps thought we couldn’t contribute much, as a small fish in a very large pond. But our research on climate change’s contribution to existential risk, Ellen Quigley’s work on responsible investment with large asset owners, Natalie Jones’ work on fossil fuel production (and more!) have strengthened our confidence that we can indeed have impact on the margin here.
Jaan emphasised that, in his experience as an investor, there has been an explosion of new renewable energy start-ups in the last three years. This is driven partly by prevailing norms, of course, but even more so by the dramatic trends of lower costs and greater investment in solar, wind and storage.
Looking forward, what should we expect? Seán noted that developments are likely to continue to happen quite quickly. CSER now has a track record of impact, and should plan for more opportunities.
As a centre and as individuals, we will have to address the tradeoff in time between standard academic routes (writing and teaching) and influencing policy. What will our mix be – how many of our researchers want to be professors, as opposed to operating within governments or NGOs? Some will perhaps go deep on one or the other – but continue to collaborate closely. Engineering is perhaps a model: a field in which the people who build the machines are respected as much as those who publish papers.