Humans are considered to have general intelligence. Endowing machines with artificial general intelligence (AGI) would allow them to adapt to a variety of situations, maximising their potential. Looking ahead, there are likely to be areas of scientific and intellectual progress that will require the types of planning, abstract reasoning, and meaningful understanding of the world that we associate with general intelligence.
The big question is whether the added potential of greater generality also implies higher risks and unknowns in comparison with the more specialised, constrained systems we are already used to. A machine translation system, specialised for a single task, raises different issues from a versatile personal assistant at home, aimed at covering more and more daily tasks. If this association between risk and generality exists, can we find trade-offs or limitations that ensure flexibility and robustness at the same time?
This project investigates these and other related questions across several scenarios, including profiles of generalised automation that are both safe and efficient, and an understanding of social dominance through general mind modelling, under several AGI paradigms. The result is a systematic cartography of the risks of AGI from the perspective of AGI's defining concept: generality.
TECHNICAL DESCRIPTION AND GOALS
Many paradigms exist, and more will be created, for developing and understanding AI. Under these paradigms, the key benefits and risks materialise very differently. One dimension pervading all of them is the notion of generality, which plays a central role in AGI, artificial general intelligence, and indeed supplies its middle letter. This project explores the safety issues of present and future AGI paradigms from the perspective of measures of generality, as a dimension complementary to performance. We investigate the following research questions:
- Should we define generality in terms of tasks, goals, or dominance? How does generality relate to capability, to computational resources, and ultimately to risks?
- What are the safe trade-offs between general systems with limited capability and less general systems with higher capability? How is this related to the efficiency and risks of automation?
- Can we replace the monolithic notion of performance explosion with breadth growth? How can this help develop safe pathways for more powerful AGI systems?
These questions are analysed for paradigms such as reinforcement learning, inverse reinforcement learning, adversarial settings (Turing learning), oracles, cognition as a service, learning by demonstration, control or traces, teaching scenarios, curriculum and transfer learning, naturalised induction, cognitive architectures, brain-inspired AI, among others.
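As a toy illustration of how generality might be measured as a dimension complementary to performance, the sketch below scores two hypothetical systems on a made-up performance matrix: capability as mean performance across tasks, and generality as the fraction of tasks on which performance clears a threshold. The system names, scores, threshold, and both measures are illustrative assumptions for this sketch, not results or definitions from the project.

```python
import statistics

# Hypothetical performance matrix: one score in [0, 1] per (system, task).
# Both systems and all scores are made up for illustration.
performance = {
    "specialised": [0.95, 0.10, 0.05, 0.08, 0.12],  # excels at one task only
    "generalist":  [0.55, 0.50, 0.60, 0.45, 0.50],  # moderate on every task
}

def capability(scores):
    """Aggregate capability: mean performance across all tasks."""
    return statistics.mean(scores)

def generality(scores, threshold=0.4):
    """Breadth: fraction of tasks where performance clears the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

for name, scores in performance.items():
    print(f"{name}: capability={capability(scores):.2f}, "
          f"generality={generality(scores):.2f}")
```

On these made-up numbers, the specialised system attains lower aggregate capability and far lower breadth than the generalist, despite dominating on its single task; this is the kind of trade-off between capability and generality that the research questions above ask about.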
You can find a more detailed description of the project in the abridged project proposal PDF.
John Burden was appointed to the postdoc position starting 1 July 2020. John has a background in Computer Science and is in the final stages of completing his PhD at the University of York. He also holds a Master's degree in Computer Science from Oriel College, Oxford.
ASSOCIATES AND ADVISORY BOARD:
The Associates strengthen the expertise and outreach in several areas of the project:
- Rob Alexander, University of York, UK.
- Jan Feyereisl, AI Roadmap Institute, CZ.
- Cèsar Ferri, Universitat Politècnica de València, ES.
- Adrià Garriga-Alonso, University of Cambridge, UK.
- Judy Goldsmith, University of Kentucky, US.
- Emilia Gómez, Centre for Advanced Studies, Joint Research Centre, European Commission, EU.
- Henry Shevlin, University of Cambridge, UK.
- Kristinn Thórisson, Reykjavik University, IS.
The International Advisory Board (IAB) of the project is composed of the following people:
- Allan Dafoe, Yale and FHI, Oxford, UK.
- Virginia Dignum, Delft University of Technology, NL.
- Kenji Doya, Okinawa Institute of Science and Technology, JP.
- Tomas Mikolov, Facebook AI Research, US.
- Vincent Müller, University of Leeds, Anatolia College, UK, GR.
- Ute Schmid, University of Bamberg, DE.
- Murray Shanahan, DeepMind, Imperial College, UK.
- Michael Witbrock, University of Auckland, NZ.
- Yi Zeng, Chinese Academy of Sciences, CN.
EVENTS AND NEWS:
- Paper: "Exploring AI Safety in Degrees: Generality, Capability and Control" (J. Burden, J. Hernández-Orallo) presented at SafeAI 2020
- Paper: "AI Paradigms and AI Safety: Mapping Artefacts and Techniques to Safety Issues" (J. Hernández-Orallo, F. Martínez-Plumed, S. Avin, J. Whittlestone and S. Ó hÉigeartaigh) presented at ECAI 2020
- AISafety 2020. We are co-organising the IJCAI Workshop on Artificial Intelligence Safety at IJCAI 2020, originally planned for August 2020 but rescheduled to January 2021 due to the COVID-19 pandemic. It will take place in Yokohama, Japan.
- Paper: "Does AI Qualify for the Job? A Bidirectional Model Mapping Labour and AI Intensities" (F. Martínez-Plumed, J. Hernández-Orallo, S. Tolan, A. Pesole, E. Fernández-Macías, E. Gómez) presented at AIES 2020
- Safe AI 2020. We co-organised the AAAI Workshop on Artificial Intelligence Safety at AAAI 2020, which took place in New York, USA, 7 February 2020.
- AI Safety Landscape initiative. We co-organised a second meeting of the AI Safety Landscape initiative at Bloomberg, New York, 6 February 2020.
- AISafety 2019. We co-organised the IJCAI Workshop on Artificial Intelligence Safety at IJCAI 2019.
- We also collaborated on a “landscape of AI safety research”, run as a second day of the AISafety workshop at IJCAI 2019.
- Paper: "Unbridled mental power" (J. Hernandez-Orallo) Nature Physics, 2019
- Paper: “Surveying safety-relevant characteristics” (J. Hernández-Orallo, F. Martínez-Plumed, S. Avin, S. Ó hÉigeartaigh) presented at SafeAI@AAAI 2019
- Safe AI 2019. We co-organised the AAAI Workshop on Artificial Intelligence Safety at AAAI 2019, which took place in Honolulu, Hawaii, 27 January 2019.
- Generality and Intelligence: from Biology to AI. We co-organised this workshop on 5th October, 2018, as part of the MIT-IBM AI Week, jointly with the MIT-IBM Watson AI Lab and the Leverhulme Centre for the Future of Intelligence.