Risks from Artificial Intelligence

Recent years have seen dramatic improvements in artificial intelligence, with even more dramatic improvements possible in the coming decades. In both the short-term and the long-term, AI should be developed in a safe and beneficial direction.

We are part of a community of technologists, academics and policy-makers with a shared interest in safe and beneficial artificial intelligence (AI). We are working in partnership with them to shape public conversation in a productive way, foster new talent, and launch new centres like the Leverhulme Centre for the Future of Intelligence. Our research has addresesed decision theory relevant to AI safety, and the near-term and long-term security implications of AI. 

The field of AI is advancing rapidly. Recent years have seen dramatic breakthroughs in image and speech recognition, autonomous robotics, and game playing. The coming decades will likely see substantial progress. This promises great benefits: new scientific discoveries, cheaper and better goods and services, medical advances. It also raises near-term concerns: privacy, bias, inequality, safety and security. But a growing body of experts within and outside the field of AI has raised concerns that future developments may pose long-term safety and security risks.

AlphaGo Zero reached a superhuman level of performance after three days of self-play

Most current AI systems are ‘narrow’ applications – specifically designed to tackle a well-specified problem in one domain, such as a particular game. Such approaches cannot adapt to new or broader challenges without significant redesign. While it may be far superior to human performance in one domain, it is not superior in other domains. However, a long-held goal in the field has been the development of artificial intelligence that can learn and adapt to a very broad range of challenges.

Superintelligence

As an AI system becomes more powerful and more general it might become superintelligent – superior to human performance in many or nearly all domains. While this might sound like science fiction, many research leaders believe it possible. Were it possible, it might be as transformative economically, socially, and politically as the Industrial Revolution. This could lead to extremely positive developments, but could also potentially pose catastrophic risks from accidents (safety) or misuse (security).

On safety: our current systems often go wrong in unpredictable ways. There are a number of difficult technical problems related to the design of accident-free artificial-intelligence Aligning current systems’ behaviour with our goals has proved difficult, and has resulted in unpredictable negative outcomes. Accidents caused by a far more powerful system would be far more destructive.

On security: a superintelligent AGI would be an economic and military asset to its possessor, perhaps even giving it a decisive strategic advantage. Were it in the hands of bad actors they might use it in harmful ways. If two groups competed to develop it first, it might have the destabilising dynamics of an arms race.

Current work

There is great uncertainty and disagreement over timelines. But whenever it is developed, it seems like there is useful work that can be done right now. Technical machine learning research into safety is now being led by teams at OpenAI, DeepMind, and the Centre for Human-Compatible AI. Strategic research into the security implications is developing as a field.

The community working towards safe and beneficial superintelligence has grown. This has come from AI researchers showing leadership on this issue – supported by extensive discussions in Machine Learning labs and conferences, the publication of Nick Bostrom’s Superintelligence, the landmark Puerto Rico conference, and high-profile support from people like CSER advisors Elon Musk and Stephen Hawking. This community is developing shared strategies to allow the benefits of AI advances to be safely realised.

Superintelligence could be possible within this century, it could be transformative and have negative consequences, and it could be that we can do useful work right now. Therefore, it is worth taking seriously and for some people to dedicate serious effort and thought to the problem.

Subscribe to our mailing list to get our latest updates