Risks from Artificial Intelligence

Recent years have seen dramatic improvements in artificial intelligence, with even more dramatic progress possible in the coming decades. In both the short term and the long term, AI should be developed in a safe and beneficial direction.

We are part of a community of technologists, academics and policy-makers with a shared interest in safe and globally beneficial artificial intelligence (AI). We work in partnership with this community to shape the public conversation productively, foster new talent, and launch new centres such as the Leverhulme Centre for the Future of Intelligence. Our research has addressed technical questions relevant to AI safety, the near-term and long-term security implications and governance of AI, and the potential for AI to help mitigate environmental and biological risks.

The field of AI is advancing rapidly. Recent years have seen dramatic breakthroughs in image and speech recognition, autonomous robotics, language tasks, and game playing. The coming decades will likely see substantial progress. This promises great benefits: new scientific discoveries, cheaper and better goods and services, medical advances. Our research and collaborations have explored applications of AI across a range of global challenges, including combating climate change, pandemic response, and food security. 

AI also raises near-term concerns: privacy, bias, inequality, safety and security. CSER’s research has identified emerging threats and trends in global cybersecurity, and has explored challenges at the intersection of AI, digitisation and nuclear weapons systems.

[Image] AlphaGo Zero reached a superhuman level of performance after three days of self-play.

Most current AI systems are ‘narrow’ applications – specifically designed to tackle a well-specified problem in one domain, such as a particular game. Such approaches cannot adapt to new or broader challenges without significant redesign. While a narrow system may far exceed human performance in its own domain, it is not superior in others. However, a long-held goal in the field has been the development of artificial intelligence that can learn and adapt to a very broad range of challenges.

AI in the longer term: opportunities and threats 

As AI systems become more powerful and more general, they may surpass human performance in many domains. If this occurs, the transition could be as transformative economically, socially, and politically as the Industrial Revolution. This could lead to extremely positive developments, but could also potentially pose catastrophic risks from accidents (safety) or misuse (security).

On safety: our current systems often go wrong in unpredictable ways. Designing artificial intelligence that does not cause accidents raises a number of difficult technical problems. In particular, aligning current systems’ behaviour with our goals has proved difficult, and misalignment has produced unpredictable negative outcomes. Accidents caused by more powerful systems would be far more destructive.
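To make this failure mode concrete, here is a deliberately toy sketch in Python. Everything in it (the world, the actions, the costs) is hypothetical and invented for this illustration; it is not CSER's research code, nor any system's actual objective. An agent maximises a proxy reward, ‘no visible mess’, and discovers that hiding mess satisfies the proxy as well as cleaning does, at lower cost.

    # Hypothetical toy example of reward misspecification, invented for
    # illustration only. The designer wants mess removed, but the reward
    # only measures *visible* mess, so hiding it also scores perfectly.
    WORLD = ["mess", "tidy", "mess"]
    COST = {"clean": 2.0, "cover": 0.5}   # covering is cheaper than cleaning

    def act(state, action):
        """Apply an action to every messy cell of the toy world."""
        if action == "clean":
            return ["tidy" if c == "mess" else c for c in state]
        if action == "cover":
            # The mess is hidden from the sensor, not removed.
            return ["covered" if c == "mess" else c for c in state]
        return state

    def proxy_reward(state, action):
        """What the designer measures: visible mess, minus energy cost."""
        visible = sum(1 for c in act(state, action) if c == "mess")
        return -visible - COST[action]

    # A reward-maximising agent simply picks the highest-scoring action.
    best = max(COST, key=lambda a: proxy_reward(WORLD, a))
    print(best)              # -> 'cover': the proxy reward is satisfied...
    print(act(WORLD, best))  # -> ['covered', 'tidy', 'covered']: the goal is not

Both actions drive the measured mess to zero, so the optimiser chooses the cheaper one; the designer’s actual goal, removing the mess, was never part of the objective. The gap between a measurable proxy and the intended goal, harmless in this toy setting, is one of the technical problems that safety research addresses.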

On security: advanced AI systems could be key economic and military assets. In the hands of bad actors, they could be used in harmful ways. If multiple groups competed to develop such systems first, the competition could take on the destabilising dynamics of an arms race. Mitigating risk and achieving the global benefits of AI will present unique governance challenges, and will require global cooperation and representation.

Towards safe and beneficial transformative AI

There is great uncertainty and disagreement over timelines for the development of advanced AI systems. But whatever the speed of progress in the field, there is useful work that can be done right now. Technical machine learning research into safety is now being led by teams at OpenAI, DeepMind, and the Centre for Human-Compatible AI, and research on AI governance and its security implications is developing into a field of its own.

The community working towards safe and beneficial superintelligence has grown worldwide. This has been driven by AI researchers showing leadership on the issue, supported by extensive discussions in machine learning labs and conferences, the landmark 2015 Puerto Rico conference, and high-profile support from figures such as CSER advisors Elon Musk and Stephen Hawking. We work closely with this community, in university labs and in tech companies, to develop shared strategies that allow the benefits of AI advances to be safely realised.

More advanced and powerful AI systems will be developed and deployed in the coming years. These systems could be transformative, with negative as well as positive consequences, and it seems that we can do useful work right now. While there are many uncertainties, we should dedicate serious effort and thought to laying the foundations for the safety of future systems and to better understanding the implications of such advances.

One of CSER's current projects, Paradigms of Artificial General Intelligence and Their Associated Risks, is funded by the Future of Life Institute through the FLI International Safety Grants Competition.
