Meet the Researcher: John Burden

29 September 2020

John Burden, Research Associate

@JohnJBurden

John’s research focuses on the challenges of evaluating the capability and generality of AI systems, with a particular emphasis on Artificial General Intelligence. He has a background in Computer Science and is in the final stages of completing his PhD at the University of York. John also holds a Master’s degree in Computer Science from Oriel College, Oxford.

Keywords: Reinforcement Learning, A(G)I Safety, Abstraction


Can you tell us about your pathway to CSER?

I completed my integrated undergraduate and master’s degree in Computer Science at the University of Oxford in 2017. From there I pursued a PhD in Computer Science at the University of York. My PhD research focused on abstraction discovery for Reinforcement Learning agents.

What is your current research focus at CSER?

At CSER I am currently working on the evaluation of AI agents. The notion of “intelligence” in “Artificial Intelligence” is generally not well understood, and it likely needs to be broken down into smaller components before it can be understood fully. Two notions of particular interest are capability (how much of a problem a system can solve) and generality (the range of tasks within a problem domain that a system can solve effectively). Psychometrics provides groundwork for measuring these factors over specific task domains. Another factor of note is the safety of a system: how safe it is to deploy and whether it poses a danger to other entities. At present we are chiefly working on identifying possible relationships between a system’s capability, generality and safety properties.

What drew you to your research initially and what parts do you find particularly interesting?

Most of our current methods for measuring “intelligence” are overtly biocentric and, additionally, usually do not measure intelligence well across different species. The space of so-called “possible minds” that future machine intelligences may inhabit is unfathomably large and alien to us as humans. Combine this with the current pace of AI research and progress while the problem of AI safety remains unsolved, and we really don’t know when an AI system may pose a large-scale or existential risk, or what such a system will be capable of doing.

What are your motivations for working in Existential Risk?

Homo sapiens is, compared to any other species that we are currently aware of, special. We can propagate knowledge across generations, and this has allowed us to create society and all of its associated structures. To me, at least, this thrusts upon us a responsibility to both maximise utility and reduce suffering in society, nature and beyond. Crucially, we cannot do this if we go extinct, or if our potential as a species is irrevocably diminished.

What do you think are the key challenges that humanity is currently facing?

Humanity as a whole is largely unable to plan for the long-term future. This stems from our current governmental and political structures, as well as our tendency as individual human agents to act self-interestedly. Our societies are seemingly layers upon layers of the prisoner’s dilemma. We often forget our duty to our fellow man and are not prepared to make the self-sacrifices that large-scale cooperation at times requires. This is why our global responses to existential threats such as climate change and AGI have been so lacklustre, despite an understanding of the global response required and of the funds that need to be allocated to both policy and research.

For people who are just getting to grips with Existential Risk, do you have any recommendations for reading, people to follow, or events to attend?

Nick Bostrom is one of the pioneers of studying existential risk. His book “Superintelligence” lays out the dangers associated with AGI and how we can begin to think about ways to mitigate those risks. His other research papers also work to identify and classify existential risks more broadly.
