We’re delighted to announce that Professor Stuart Russell (Berkeley) will be giving a CSER public lecture on May 15th.
The lecture is free and open to everyone, but demand is expected to be high, so pre-registration is required. Registration and details are available here.
The Long-Term Future of (Artificial) Intelligence
Abstract: The news media in recent months have been full of dire warnings about the risk that AI poses to the human race, coming from well-known figures such as Stephen Hawking, Elon Musk, and Bill Gates. Should we be concerned? If so, what can we do about it? While some in the mainstream AI community dismiss these concerns, I will argue instead that a fundamental reorientation of the field is required.
Professor Stuart Russell (Berkeley) is one of the biggest names in modern artificial intelligence worldwide. His Artificial Intelligence: A Modern Approach (co-written with Google’s head of research, Peter Norvig) is a leading textbook in the field.
He is also one of the most prominent people thinking about the long-term impacts and future of AI. He has raised concerns about the potential future use of fully autonomous weapons in war. Thinking longer-term, he has posed the question “What if we succeed?” in developing strong AI, and suggested that such success might represent the biggest event in human history. He has organised a number of prominent workshops and meetings on this topic, and this January wrote an open letter calling for a realignment of the field towards research on the safe and beneficial development of AI, now signed by a who’s who of field leaders worldwide.
Other relevant articles on or by Professor Russell:
The long-term future of AI (from his own website)
Of myths and moonshine – his response to the Edge.org question on the myth of AI.
Concerns of an artificial intelligence pioneer – his interview in Quanta Magazine.