Human-level AI: Is it Looming or Illusory?

Published on 23 June 2015

The Centre for the Study of Existential Risk’s June 2015 Lecture, with Professor Margaret Boden.

Human-level (“general”) AI is more difficult to achieve than most people think. One key obstacle is relevance, a conceptual version of the frame problem. Another is the lack of a semantic web. Yet another is the difficulty of computer vision. So artificial general intelligence (AGI) isn’t on the horizon; indeed, it may never be achieved. No AGI means no Singularity. Even so, there’s already plenty to worry about—and future AI advances will add more. Areas of concern include unemployment, computer companions, and autonomous robots (some of them military). Worries about the (illusory) Singularity have had the good effect of waking up the AI community (and others) to these dangers. At last, they are being taken seriously.

Professor Margaret Boden is a world-leading academic in the study of intelligence, both artificial and otherwise. She is Research Professor of Cognitive Science in the Department of Informatics at the University of Sussex, where her work embraces the fields of artificial intelligence, psychology, philosophy, and cognitive and computer science. She was the founding Dean of Sussex University’s School of Cognitive and Computing Sciences, a pioneering centre for research into intelligence and the mechanisms underlying it — in humans, other animals, and machines. The School’s teaching and research involve an unusual combination of the humanities, science, and technology.

Professor Boden has also been an important participant in recent international discussions of the long-term impacts of AI. She was a member of the AAAI’s 08/09 Presidential Panel on long-term AI futures (…), and also took part in the recent Puerto Rico conference on the Future of AI, co-organised by CSER. She is therefore uniquely well placed to discuss both near- and long-term prospects in AI.
