- The paper addresses the management of the catastrophic and existential risk potential of general and specialised AI.
- It does so from the perspective of an EC Future and Emerging Technologies Flagship, the Human Brain Project (HBP).
- It builds on Foresight, Researcher Awareness and Ethics Management work in the HBP Ethics and Society subproject.
- It illustrates key aspects of the dynamic approach to questions of ethics and society in the HBP.
- Its self-reflexive, practice-based evidence aims at guiding policy makers and communities who engage with such questions.
This paper addresses the question of managing the existential risk potential of general Artificial Intelligence (AI), as well as the more near-term yet hazardous and disruptive implications of specialised AI, from the perspective of a particular research project that could make a significant contribution to the development of AI: the Human Brain Project (HBP), a ten-year Future and Emerging Technologies Flagship of the European Commission. The HBP aims to create a digital research infrastructure for brain science, cognitive neuroscience, and brain-inspired computing. This paper builds on work undertaken in the HBP’s Ethics and Society subproject (SP12). Collaborators from two activities in SP12, Foresight and Researcher Awareness on the one hand, and Ethics Management on the other, use the case of machine intelligence to illustrate key aspects of the dynamic processes through which questions of ethics and society, including existential risks, are approached in the organisational context of the HBP. The overall aim of the paper is to provide practice-based evidence, enriched by self-reflexive assessment of the approach used and its limitations, for guiding policy makers and communities who are, and will be, engaging with such questions.
This paper was published in a Special Issue of Futures edited by Dr Adrian Currie, which collected many of the papers originally presented at our first Cambridge Conference on Catastrophic Risk in 2016.