Managing Extreme Technological Risks

Existential risks share common methodological challenges: how to horizon-scan for and evaluate low-probability, high-impact events, and how to encourage responsible innovation amongst technologists and a safety culture amongst scientists. Lessons learnt in studying one risk can therefore be applied across others.

Risks associated with emerging and future technological advances, and with the impacts of human activity, threaten human extinction or civilisational collapse. Managing these extreme technological risks is an urgent task, but one that poses particular difficulties and has been comparatively neglected in academia.

Our flagship research project is to develop, implement and refine a systematic approach to identifying, managing and mitigating this class of risks.
There seems to be a small but real possibility that civilisation ends in the next century. This would not only be terrible for the present generation; it would also permanently foreclose the possibility of a good future. Future generations have no voice in our present politics and society, but it seems discriminatory to disregard their interests.

Most people are familiar with natural risks such as asteroid impacts and supervolcano eruptions. We are more concerned with risks arising from human technology and activity, such as nuclear war, engineered pandemics, climate change, ecological collapse, and advanced artificial intelligence.

Our hypothesis is that studying these risks in a collaborative, interdisciplinary way within one centre, with strong links to technologists, policy-makers, and other academics, will uncover new insights and approaches.

This project was made possible through the support of a grant from Templeton World Charity Foundation, Inc. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of Templeton World Charity Foundation, Inc.